[jira] [Commented] (HBASE-17688) MultiRowRangeFilter not working correctly if given same start and stop RowKey

2017-02-23 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882171#comment-15882171
 ] 

Jingcheng Du commented on HBASE-17688:
--

This issue exists in other branches too. It is caused by a bug in 
{{RowRange.contains}}, which wrongly uses isScan.
Hi [~ahujaravi1], do you want to provide the patch? Or I can do it as well. Thanks.
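For illustration, here is a minimal plain-Java sketch of the inclusive start/stop check that a {{RowRange.contains}} needs to pass for this case. This is a simplified stand-in, not the actual org.apache.hadoop.hbase.filter code, and the fix shown is an assumption about the intended behavior rather than the committed patch:

```java
// Simplified stand-in for MultiRowRangeFilter.RowRange. The byte[] comparison
// mimics HBase's Bytes.compareTo (lexicographic, unsigned).
class RowRangeSketch {
    private final byte[] start, stop;
    private final boolean startInclusive, stopInclusive;

    RowRangeSketch(String start, boolean startInclusive,
                   String stop, boolean stopInclusive) {
        this.start = start.getBytes();
        this.startInclusive = startInclusive;
        this.stop = stop.getBytes();
        this.stopInclusive = stopInclusive;
    }

    private static int compare(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    // A row is inside the range when it is after the (possibly inclusive)
    // start AND before the (possibly inclusive) stop. The stop bound must be
    // honoured even when start equals stop, otherwise the scan runs past it.
    boolean contains(byte[] row) {
        int cmpStart = compare(row, start);
        int cmpStop = compare(row, stop);
        boolean afterStart = startInclusive ? cmpStart >= 0 : cmpStart > 0;
        boolean beforeStop = stopInclusive ? cmpStop <= 0 : cmpStop < 0;
        return afterStart && beforeStop;
    }
}
```

With a range of ["abc", "abc"] inclusive on both ends, only the single row "abc" is contained; any later row must fall outside it.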

> MultiRowRangeFilter not working correctly if given same start and stop RowKey
> -
>
> Key: HBASE-17688
> URL: https://issues.apache.org/jira/browse/HBASE-17688
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Ravi Ahuj
>Priority: Minor
>
>
> try (final Connection conn = ConnectionFactory.createConnection(conf);
>      final Table scanTable = conn.getTable(table)) {
>   ArrayList<MultiRowRangeFilter.RowRange> rowRangesList = new ArrayList<>();
>   String startRowkey = "abc";
>   String stopRowkey = "abc";
>   rowRangesList.add(new MultiRowRangeFilter.RowRange(startRowkey, true, stopRowkey, true));
>   Scan scan = new Scan();
>   scan.setFilter(new MultiRowRangeFilter(rowRangesList));
>   ResultScanner scanner = scanTable.getScanner(scan);
>   for (Result result : scanner) {
>     String rowkey = new String(result.getRow());
>     System.out.println(rowkey);
>   }
> }
>
> In the HBase Java API, we want to do multiple scans of a table using 
> MultiRowRangeFilter.
> When we give it multiple ranges of start and stop row keys, it does not work 
> properly when a range has the same start and stop row key.
> Ideally, it should return only the single row with that row key, but instead it 
> returns all the rows starting from that row key in the HBase table.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17691) Add ScanMetrics support for async scan

2017-02-23 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-17691:
-

 Summary: Add ScanMetrics support for async scan
 Key: HBASE-17691
 URL: https://issues.apache.org/jira/browse/HBASE-17691
 Project: HBase
  Issue Type: Sub-task
Reporter: Duo Zhang






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17338) Treat Cell data size under global memstore heap size only when that Cell can not be copied to MSLAB

2017-02-23 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882169#comment-15882169
 ] 

ramkrishna.s.vasudevan commented on HBASE-17338:


The rest is all good. But as discussed internally, adding isOffheap() to 
MSLAB is better, so that we can avoid adding the dataSize to the 
MemstoreSize when the memstore is off-heap but the Cell is still on-heap. 
Currently we just account for the dataSize. We can do that in another JIRA. 
+1 otherwise.

> Treat Cell data size under global memstore heap size only when that Cell can 
> not be copied to MSLAB
> ---
>
> Key: HBASE-17338
> URL: https://issues.apache.org/jira/browse/HBASE-17338
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-17338.patch, HBASE-17338_V2.patch, 
> HBASE-17338_V2.patch, HBASE-17338_V4.patch
>
>
> We only track data size and heap overhead globally.  An off-heap memstore 
> works with an off-heap backed MSLAB pool.  But a cell, when added to the 
> memstore, is not always copied to the MSLAB: Append/Increment ops do an 
> upsert and don't use the MSLAB, and based on the Cell size we sometimes skip 
> the MSLAB copy.  Yet we still track such cells' data size under the global 
> memstore data size, which indicates off-heap size in the case of an off-heap 
> memstore.  For the global flush checks (against the lower/upper watermark 
> levels), we check this size against the max off-heap memstore size.  We do check 
> heap overhead against the global heap memstore size (defaults to 40% of Xmx), but 
> for such cells the data size should also be accounted under the heap overhead.
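The accounting change described above can be sketched in plain Java. All names here are illustrative, not the HBase MemStore API; the point is only where a cell's data size is charged when it is not copied to the off-heap MSLAB:

```java
// Illustrative accounting: when a cell is NOT copied to the off-heap MSLAB,
// its data stays on-heap, so its data size should count against the heap
// budget rather than only the off-heap memstore size.
class MemstoreAccountingSketch {
    long offHeapDataSize;   // checked against the max off-heap memstore size
    long heapSize;          // checked against the global heap limit (~40% of Xmx)

    // copiedToMslab: whether the cell's data was relocated into an off-heap
    // MSLAB chunk (false for upserts and for cells that skip the MSLAB copy).
    void add(long cellDataSize, long cellHeapOverhead, boolean copiedToMslab) {
        heapSize += cellHeapOverhead;          // overhead is always on-heap
        if (copiedToMslab) {
            offHeapDataSize += cellDataSize;   // data lives off-heap
        } else {
            heapSize += cellDataSize;          // data remains an on-heap object
        }
    }
}
```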



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17680) Run mini cluster through JNI in tests

2017-02-23 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882164#comment-15882164
 ] 

Devaraj Das commented on HBASE-17680:
-

I think for now it's fine to make the whole thing work via Buck (which assumes 
Docker). For the Makefile-based builds we can read JAVA_HOME, etc. It makes 
sense to write the Java wrapper for the HTU that combines various operations, 
and to write a thinner layer for the JNI.

> Run mini cluster through JNI in tests
> -
>
> Key: HBASE-17680
> URL: https://issues.apache.org/jira/browse/HBASE-17680
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 17680.v1.txt, 17680.v3.txt, 17680.v8.txt
>
>
> Currently tests start a local HBase cluster through the hbase shell.
> There is less control over the configuration of the local cluster this way.
> This issue would replace the hbase shell with a JNI interface to the mini 
> cluster, giving us full control over the cluster behavior.
> Thanks to [~devaraj] who started this initiative.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17682) Region stuck in merging_new state indefinitely

2017-02-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882162#comment-15882162
 ] 

Hudson commented on HBASE-17682:


SUCCESS: Integrated in Jenkins build HBase-1.1-JDK8 #1933 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1933/])
HBASE-17682 Region stuck in merging_new state indefinitely (apurtell: rev 
1a81a27ac08bd56300c9a6e9aa208b7266b1493a)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java


> Region stuck in merging_new state indefinitely
> --
>
> Key: HBASE-17682
> URL: https://issues.apache.org/jira/browse/HBASE-17682
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5, 1.1.10
>
> Attachments: HBASE-17682.branch-1.3.001.patch, 
> HBASE-17682.master.001.patch
>
>
> Ran into this issue while tinkering with a chaos monkey that did splits, 
> merges and kills exclusively, which resulted in regions getting stuck in 
> transition in the MERGING_NEW state indefinitely. I think this happens when 
> the RS is killed during the merge but before the PONR (point of no return), 
> in which case the new region's state in the master is MERGING_NEW. When the 
> RS dies at this point, the master executes RegionStates.serverOffline() for 
> the RS, which does
> {code}
> for (RegionState state : regionsInTransition.values()) {
>   HRegionInfo hri = state.getRegion();
>   if (assignedRegions.contains(hri)) {
>     // Region is open on this region server, but in transition.
>     // This region must be moving away from this server, or splitting/merging.
>     // SSH will handle it, either skip assigning, or re-assign.
>     LOG.info("Transitioning " + state + " will be handled by ServerCrashProcedure for " + sn);
>   } else if (sn.equals(state.getServerName())) {
>     // Region is in transition on this region server, and this
>     // region is not open on this server. So the region must be
>     // moving to this server from another one (i.e. opening or
>     // pending open on this server, was open on another one.
>     // Offline state is also kind of pending open if the region is in
>     // transition. The region could be in failed_close state too if we have
>     // tried several times to open it while this region server is not reachable)
>     if (state.isPendingOpenOrOpening() || state.isFailedClose() || state.isOffline()) {
>       LOG.info("Found region in " + state + " to be reassigned by ServerCrashProcedure for " + sn);
>       rits.add(hri);
>     } else if (state.isSplittingNew()) {
>       regionsToCleanIfNoMetaEntry.add(state.getRegion());
>     } else {
>       LOG.warn("THIS SHOULD NOT HAPPEN: unexpected " + state);
>     }
>   }
> }
> {code}
> We do not handle MERGING_NEW here and end up with "THIS SHOULD NOT HAPPEN: 
> unexpected ...". After this we have the new region, which does not have any 
> data, stuck in transition, which leads to the balancer not running.
> I think we should handle MERGING_NEW the same way as SPLITTING_NEW.
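The proposed handling can be sketched with simplified stand-ins (an enum and plain lists, not the actual RegionStates code): regions in SPLITTING_NEW or MERGING_NEW on a dead server are queued for cleanup if they never made it into meta, instead of falling through to the warning branch.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of the serverOffline() dispatch shown above.
class ServerOfflineSketch {
    enum State { PENDING_OPEN, OPENING, FAILED_CLOSE, OFFLINE,
                 SPLITTING_NEW, MERGING_NEW, OPEN }

    static final List<State> rits = new ArrayList<>();
    static final List<State> regionsToCleanIfNoMetaEntry = new ArrayList<>();
    static final List<State> unexpected = new ArrayList<>();

    static void handle(State state) {
        switch (state) {
            case PENDING_OPEN: case OPENING: case FAILED_CLOSE: case OFFLINE:
                rits.add(state);                    // reassign via ServerCrashProcedure
                break;
            case SPLITTING_NEW:
            case MERGING_NEW:                       // the case missing in the bug
                regionsToCleanIfNoMetaEntry.add(state);
                break;
            default:
                unexpected.add(state);              // "THIS SHOULD NOT HAPPEN"
        }
    }
}
```

With MERGING_NEW routed alongside SPLITTING_NEW, a region orphaned by an RS death before the PONR is cleaned up rather than left in transition.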



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17312) [JDK8] Use default method for Observer Coprocessors

2017-02-23 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-17312:
-
Attachment: HBASE-17312.master.005.patch

> [JDK8] Use default method for Observer Coprocessors
> ---
>
> Key: HBASE-17312
> URL: https://issues.apache.org/jira/browse/HBASE-17312
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Appy
>  Labels: incompatible
> Attachments: HBASE-17312.master.001.patch, 
> HBASE-17312.master.001.patch, HBASE-17312.master.002.patch, 
> HBASE-17312.master.003.patch, HBASE-17312.master.004.patch, 
> HBASE-17312.master.005.patch
>
>
> In cases where one might need to use multiple observers, say region, master 
> and regionserver, the fact that only one class can be extended gives rise to 
> the following pattern:
> {noformat}
> public class BaseMasterAndRegionObserver
>   extends BaseRegionObserver
>   implements MasterObserver
> class AccessController
>   extends BaseMasterAndRegionObserver
>   implements RegionServerObserver
> {noformat}
> where BaseMasterAndRegionObserver is a full copy of BaseMasterObserver.
> There is also an example of a simple case where the current design fails.
> Say only one observer is needed by the coprocessor, but the design doesn't 
> permit extending even that single observer (see RSGroupAdminEndpoint). That 
> leads to a full copy of the Base...Observer class into the coprocessor class, 
> resulting in 1000s of lines of code and an ugly mix of 5 meaningful functions 
> with 100 useless ones.
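The JDK8 alternative the issue proposes can be sketched as follows. The interfaces and method names below are illustrative stand-ins, not HBase's actual Observer APIs; the point is that default methods remove the need for Base...Observer classes entirely:

```java
// With default methods, each observer interface carries its own no-op
// implementations, so a coprocessor implements as many observer interfaces
// as it needs and overrides only the hooks it cares about.
interface RegionObserverSketch {
    default void preGet(String row) {}        // no-op by default
    default void postGet(String row) {}
}

interface MasterObserverSketch {
    default void preCreateTable(String table) {}
}

// One class, several observers, no copied base classes.
class AccessControllerSketch implements RegionObserverSketch, MasterObserverSketch {
    int checks = 0;
    @Override public void preGet(String row) { checks++; }  // the only hook we need
}
```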



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16991) Make the initialization of AsyncConnection asynchronous

2017-02-23 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-16991:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master.

Thanks all for reviewing.

> Make the initialization of AsyncConnection asynchronous
> ---
>
> Key: HBASE-16991
> URL: https://issues.apache.org/jira/browse/HBASE-16991
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16991.patch, HBASE-16991-v1.patch, 
> HBASE-16991-v2.patch, HBASE-16991-v3.patch
>
>
> Now the ConnectionFactory.createAsyncConnection is still blocking. We should 
> make it return a CompletableFuture to make the async client fully 
> asynchronous.
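The shape of the change can be sketched as below. The class and field names are illustrative, not the HBase client API; the point is returning a CompletableFuture that completes once initialization (e.g. the cluster-id lookup) finishes, instead of blocking the caller:

```java
import java.util.concurrent.CompletableFuture;

// Sketch of an asynchronous connection factory.
class AsyncConnFactorySketch {
    static class AsyncConn {
        final String clusterId;
        AsyncConn(String clusterId) { this.clusterId = clusterId; }
    }

    static CompletableFuture<AsyncConn> createAsyncConnection() {
        // supplyAsync stands in for the non-blocking cluster-id fetch;
        // the future completes when the connection is fully initialized.
        return CompletableFuture.supplyAsync(() -> new AsyncConn("test-cluster"));
    }
}
```

Callers can then chain work with thenApply/thenCompose rather than waiting on construction.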



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16859) Use Bytebuffer pool for non java clients specifically for scans/gets

2017-02-23 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-16859:
---
Status: Patch Available  (was: Open)

> Use Bytebuffer pool for non java clients specifically for scans/gets
> 
>
> Key: HBASE-16859
> URL: https://issues.apache.org/jira/browse/HBASE-16859
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-16859_V1.patch, HBASE-16859_V2.patch, 
> HBASE-16859_V2.patch, HBASE-16859_V4.patch, HBASE-16859_V5.patch, 
> HBASE-16859_V6.patch, HBASE-16859_V7.patch
>
>
> In the case of non-Java clients we still write the results and header into an 
> on-demand byte[]. This can be changed to use the BBPool (onheap or offheap 
> buffer?).
> But the basic problem is identifying whether the response is for scans/gets.
> - One easy way is to use the MethodDescriptor per Call and use the name of 
> the MethodDescriptor to identify a scan/get. But this would pollute RpcServer 
> with checks for scan/get type responses.
> - Another way is to always set the result to the cellScanner, knowing that 
> isClientCellBlockSupported is going to be false for non-PB clients, so the 
> cellScanner is ignored and the results go out in PB. But this is not clean.
> - A third way: we already have an RpcCallContext being passed to the RS. In 
> the case of scans/gets/multiGets we already set an RpcCallback for the 
> shipped call. So on response we can check whether the callback is non-null 
> and check isClientCellBlockSupported. In that case we can get a BB from the 
> pool and write the result and header to that BB. Maybe this looks clean?
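The third option can be sketched as below. This is a simplified, self-contained illustration with hypothetical names (the real code would involve RpcCallContext and HBase's ByteBufferPool), showing only the decision of when to serve the response from a pooled buffer:

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;

// Option 3 sketched: on response, if a shipped-call callback was registered
// (i.e. this was a scan/get/multiGet) and the client cannot take cell blocks,
// serve the PB response from a pooled buffer instead of a fresh byte[].
class ResponseBufferSketch {
    static final ArrayDeque<ByteBuffer> pool = new ArrayDeque<>();

    static ByteBuffer bufferFor(boolean hasShippedCallback,
                                boolean clientCellBlockSupported, int size) {
        if (hasShippedCallback && !clientCellBlockSupported) {
            ByteBuffer bb = pool.poll();                // try to reuse a pooled buffer
            if (bb != null && bb.capacity() >= size) {
                bb.clear();
                return bb;
            }
        }
        return ByteBuffer.allocate(size);               // fall back to on-demand
    }
}
```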



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16859) Use Bytebuffer pool for non java clients specifically for scans/gets

2017-02-23 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-16859:
---
Attachment: HBASE-16859_V7.patch

Rebased patch for trunk. 

> Use Bytebuffer pool for non java clients specifically for scans/gets
> 
>
> Key: HBASE-16859
> URL: https://issues.apache.org/jira/browse/HBASE-16859
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-16859_V1.patch, HBASE-16859_V2.patch, 
> HBASE-16859_V2.patch, HBASE-16859_V4.patch, HBASE-16859_V5.patch, 
> HBASE-16859_V6.patch, HBASE-16859_V7.patch
>
>
> In the case of non-Java clients we still write the results and header into an 
> on-demand byte[]. This can be changed to use the BBPool (onheap or offheap 
> buffer?).
> But the basic problem is identifying whether the response is for scans/gets.
> - One easy way is to use the MethodDescriptor per Call and use the name of 
> the MethodDescriptor to identify a scan/get. But this would pollute RpcServer 
> with checks for scan/get type responses.
> - Another way is to always set the result to the cellScanner, knowing that 
> isClientCellBlockSupported is going to be false for non-PB clients, so the 
> cellScanner is ignored and the results go out in PB. But this is not clean.
> - A third way: we already have an RpcCallContext being passed to the RS. In 
> the case of scans/gets/multiGets we already set an RpcCallback for the 
> shipped call. So on response we can check whether the callback is non-null 
> and check isClientCellBlockSupported. In that case we can get a BB from the 
> pool and write the result and header to that BB. Maybe this looks clean?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16859) Use Bytebuffer pool for non java clients specifically for scans/gets

2017-02-23 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-16859:
---
Status: Open  (was: Patch Available)

> Use Bytebuffer pool for non java clients specifically for scans/gets
> 
>
> Key: HBASE-16859
> URL: https://issues.apache.org/jira/browse/HBASE-16859
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-16859_V1.patch, HBASE-16859_V2.patch, 
> HBASE-16859_V2.patch, HBASE-16859_V4.patch, HBASE-16859_V5.patch, 
> HBASE-16859_V6.patch, HBASE-16859_V7.patch
>
>
> In the case of non-Java clients we still write the results and header into an 
> on-demand byte[]. This can be changed to use the BBPool (onheap or offheap 
> buffer?).
> But the basic problem is identifying whether the response is for scans/gets.
> - One easy way is to use the MethodDescriptor per Call and use the name of 
> the MethodDescriptor to identify a scan/get. But this would pollute RpcServer 
> with checks for scan/get type responses.
> - Another way is to always set the result to the cellScanner, knowing that 
> isClientCellBlockSupported is going to be false for non-PB clients, so the 
> cellScanner is ignored and the results go out in PB. But this is not clean.
> - A third way: we already have an RpcCallContext being passed to the RS. In 
> the case of scans/gets/multiGets we already set an RpcCallback for the 
> shipped call. So on response we can check whether the callback is non-null 
> and check isClientCellBlockSupported. In that case we can get a BB from the 
> pool and write the result and header to that BB. Maybe this looks clean?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17374) ZKPermissionWatcher crashed when grant after region close

2017-02-23 Thread Liu Junhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882130#comment-15882130
 ] 

Liu Junhong commented on HBASE-17374:
-

Our ACL configuration is as below. It has been configured this way since HBase 
0.94, and it results in this issue.

<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.AccessController,org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint,org.apache.hadoop.hbase.regionserver.ExternalMetricObserver</value>
</property>



> ZKPermissionWatcher crashed when grant after region close
> -
>
> Key: HBASE-17374
> URL: https://issues.apache.org/jira/browse/HBASE-17374
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.15
>Reporter: Liu Junhong
>Assignee: Liu Junhong
>Priority: Critical
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 0001-fix-for-HBASE-17374-20161228.patch, 
> 0001-fix-for-HBASE-17374.patch
>
>
> It has happened many times that I granted some permissions, but on a few 
> regionservers the grant did not take effect and they had to be restarted. 
> When I looked up the logs, I found:
> 2016-12-08 15:00:26,238 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] handler.CloseRegionHandler 
> (CloseRegionHandler.java:process(128)) - Processing close of 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.
> {color:red} 2016-12-08 15:00:26,242 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:doClose(1163)) - Closing 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.: disabling 
> compactions & flushes {color}
> 2016-12-08 15:00:26,242 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:doClose(1190)) - Updates disabled for region 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.
> 2016-12-08 15:00:26,242 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:internalFlushcache(1753)) - Started memstore flush for 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14., current 
> region memstore size 160
> 2016-12-08 15:00:26,284 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
> regionserver.DefaultStoreFlusher (DefaultStoreFlusher.java:flushSnapshot(95)) 
> - Flushed, sequenceid=6, memsize=160, hasBloomFilter=true, into tmp file 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/.tmp/8d734ce3d93e40628d8f82111e754cb3
> 2016-12-08 15:00:26,303 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
> regionserver.HRegionFileSystem (HRegionFileSystem.java:commitStoreFile(370)) 
> - Committing store file 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/.tmp/8d734ce3d93e40628d8f82111e754cb3
>  as 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/cf2/8d734ce3d93e40628d8f82111e754cb3
> 2016-12-08 15:00:26,318 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HStore 
> (HStore.java:commitFile(877)) - Added 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/cf2/8d734ce3d93e40628d8f82111e754cb3,
>  entries=1, sequenceid=6, filesize=985
> 2016-12-08 15:00:26,319 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:internalFlushcache(1920)) - Finished memstore flush of 
> ~160/160, currentsize=0/0 for region 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14. in 77ms, 
> sequenceid=6, compaction requested=false
> 2016-12-08 15:00:26,323 INFO  
> [StoreCloserThread-data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.-1]
>  regionserver.HStore (HStore.java:close(774)) - Closed cf1
> 2016-12-08 15:00:26,325 INFO  
> [StoreCloserThread-data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.-1]
>  regionserver.HStore (HStore.java:close(774)) - Closed cf2
> 2016-12-08 15:00:26,326 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.security.token.TokenProvider
> {color:red}  2016-12-08 15:00:26,326 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.security.access.AccessController  {color}
> 2016-12-08 15:00:26,327 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> 

[jira] [Commented] (HBASE-17460) enable_table_replication can not perform cyclic replication of a table

2017-02-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882122#comment-15882122
 ] 

Hudson commented on HBASE-17460:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2561 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2561/])
HBASE-17460 enable_table_replication can not perform cyclic replication (tedyu: 
rev 371f2bd9071da0b56565df65c27024c0776942a1)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java


> enable_table_replication can not perform cyclic replication of a table
> --
>
> Key: HBASE-17460
> URL: https://issues.apache.org/jira/browse/HBASE-17460
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: NITIN VERMA
>Assignee: NITIN VERMA
>  Labels: incompatibleChange, replication
> Fix For: 2.0.0
>
> Attachments: 17460-addendum.txt, 17460-addendum.v2.txt, 
> 17460.branch-1.v3.txt, 17460.v5.txt, HBASE-17460.patch, HBASE-17460_v2.patch, 
> HBASE-17460_v3.patch, HBASE-17460_v4.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> The enable_table_replication operation is broken for cyclic replication of an 
> HBase table because we compare all the properties of the column families 
> (including REPLICATION_SCOPE).
> Below is exactly what happens:
> 1. Running "enable_table_replication 'table1'" on the first cluster sets the 
> REPLICATION_SCOPE of all column families to '1'. It also creates the table 
> on the second cluster, where REPLICATION_SCOPE is still set to '0'.
> 2. Now when we run "enable_table_replication 'table1'" on the second cluster, 
> we compare all the properties of the table (including REPLICATION_SCOPE), 
> which obviously differ now.
> I am proposing a fix for this issue where we avoid comparing 
> REPLICATION_SCOPE inside HColumnDescriptor::compareTo(), especially when 
> replication is not already enabled on the desired table.
> I have made that change and it is working. I will submit the patch soon.
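The proposed comparison can be sketched in plain Java. Column-family descriptors are modeled here as simple property maps (a stand-in for HColumnDescriptor, which is not reproduced); the idea is to treat two descriptors as equal when everything except REPLICATION_SCOPE matches:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed fix: ignore REPLICATION_SCOPE when comparing
// column-family properties, so the second cluster's enable_table_replication
// does not reject a table that only differs in replication scope.
class FamilyCompareSketch {
    static boolean sameIgnoringScope(Map<String, String> a, Map<String, String> b) {
        Map<String, String> ca = new HashMap<>(a);
        Map<String, String> cb = new HashMap<>(b);
        ca.remove("REPLICATION_SCOPE");   // the property that legitimately differs
        cb.remove("REPLICATION_SCOPE");
        return ca.equals(cb);
    }
}
```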



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16991) Make the initialization of AsyncConnection asynchronous

2017-02-23 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882108#comment-15882108
 ] 

Duo Zhang commented on HBASE-16991:
---

The failed UTs are unrelated. Will commit shortly.

> Make the initialization of AsyncConnection asynchronous
> ---
>
> Key: HBASE-16991
> URL: https://issues.apache.org/jira/browse/HBASE-16991
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16991.patch, HBASE-16991-v1.patch, 
> HBASE-16991-v2.patch, HBASE-16991-v3.patch
>
>
> Now the ConnectionFactory.createAsyncConnection is still blocking. We should 
> make it return a CompletableFuture to make the async client fully 
> asynchronous.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17690) Clean up MOB code

2017-02-23 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HBASE-17690:
-
Status: Patch Available  (was: Open)

> Clean up MOB code
> -
>
> Key: HBASE-17690
> URL: https://issues.apache.org/jira/browse/HBASE-17690
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HBASE-17690.patch
>
>
> Clean up the code in MOB.
> # Fix the incorrect description in comments.
> # Fix the warning and remove redundant import in code.
> # Remove the references to the deprecated code.
> # Add throughput controller for DefaultMobStoreFlusher and 
> DefaultMobStoreCompactor.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17690) Clean up MOB code

2017-02-23 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HBASE-17690:
-
Description: 
Clean up the code in MOB.
# Fix the incorrect description in comments.
# Fix the warning and remove redundant import in code.
# Remove the references to the deprecated code.
# Add throughput controller for DefaultMobStoreFlusher and 
DefaultMobStoreCompactor.

  was:
Clean up the code in MOB.
# Fix the incorrect description in comments.
# Fix the warning and remove redundant reference in code.
# Correct the code used in unit tests.
# Add throughput controller for DefaultMobStoreFlusher and 
DefaultMobStoreCompactor.


> Clean up MOB code
> -
>
> Key: HBASE-17690
> URL: https://issues.apache.org/jira/browse/HBASE-17690
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HBASE-17690.patch
>
>
> Clean up the code in MOB.
> # Fix the incorrect description in comments.
> # Fix the warning and remove redundant import in code.
> # Remove the references to the deprecated code.
> # Add throughput controller for DefaultMobStoreFlusher and 
> DefaultMobStoreCompactor.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17690) Clean up MOB code

2017-02-23 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HBASE-17690:
-
Attachment: HBASE-17690.patch

Upload the first patch for review.

> Clean up MOB code
> -
>
> Key: HBASE-17690
> URL: https://issues.apache.org/jira/browse/HBASE-17690
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HBASE-17690.patch
>
>
> Clean up the code in MOB.
> # Fix the incorrect description in comments.
> # Fix the warning and remove redundant reference in code.
> # Correct the code used in unit tests.
> # Add throughput controller for DefaultMobStoreFlusher and 
> DefaultMobStoreCompactor.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17690) Clean up MOB code

2017-02-23 Thread Jingcheng Du (JIRA)
Jingcheng Du created HBASE-17690:


 Summary: Clean up MOB code
 Key: HBASE-17690
 URL: https://issues.apache.org/jira/browse/HBASE-17690
 Project: HBase
  Issue Type: Improvement
  Components: mob
Reporter: Jingcheng Du
Assignee: Jingcheng Du


Clean up the code in MOB.
# Fix the incorrect description in comments.
# Fix the warning and remove redundant reference in code.
# Correct the code used in unit tests.
# Add throughput controller for DefaultMobStoreFlusher and 
DefaultMobStoreCompactor.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17584) Expose ScanMetrics with ResultScanner rather than Scan

2017-02-23 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17584:
--
Status: Patch Available  (was: Open)

> Expose ScanMetrics with ResultScanner rather than Scan
> --
>
> Key: HBASE-17584
> URL: https://issues.apache.org/jira/browse/HBASE-17584
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, mapreduce, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17584.patch
>
>
> I think this has been discussed many times... It is a bad practice to 
> directly modify the Scan object passed in when calling getScanner. The reason 
> we cannot use a copy is that we need the Scan object to expose scan metrics. 
> So we need to find another way to expose the metrics.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17584) Expose ScanMetrics with ResultScanner rather than Scan

2017-02-23 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17584:
--
Attachment: HBASE-17584.patch

Add a reset flag to ScanMetrics.getScanMetrics. Do not reset the ScanMetrics 
when publishing them to the Scan object, so you can still use 
Scan.getScanMetrics to get the metrics. But if you use 
ResultScanner.getScanMetrics and reset the counters, the metrics published to 
the Scan object will be messed up. I think this is acceptable: if you use 
ResultScanner.getScanMetrics to get the metrics, you no longer need 
Scan.getScanMetrics.
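The reset-flag idea can be sketched as below. Class and counter names are illustrative, not the committed API; the point is the two read modes, a consuming read for the ResultScanner path and a non-resetting peek for the legacy publish-to-Scan path:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a reset flag on a metrics getter.
class ScanMetricsSketch {
    final AtomicLong countOfRowsScanned = new AtomicLong();

    long getCounter(boolean reset) {
        return reset
            ? countOfRowsScanned.getAndSet(0)   // consume (ResultScanner path)
            : countOfRowsScanned.get();         // peek (publish to Scan path)
    }
}
```

Once a caller consumes with reset=true, subsequent non-resetting reads see zero, which is the interaction described above.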

> Expose ScanMetrics with ResultScanner rather than Scan
> --
>
> Key: HBASE-17584
> URL: https://issues.apache.org/jira/browse/HBASE-17584
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, mapreduce, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17584.patch
>
>
> I think this has been discussed many times... It is a bad practice to 
> directly modify the Scan object passed in when calling getScanner. The reason 
> we cannot use a copy is that we need the Scan object to expose scan metrics. 
> So we need to find another way to expose the metrics.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16991) Make the initialization of AsyncConnection asynchronous

2017-02-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15882095#comment-15882095
 ] 

Hadoop QA commented on HBASE-16991:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 38s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 44s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
21s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 49s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
30s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
8s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
55s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
39m 36s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 41s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 109m 12s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 33s 
{color} | {color:green} hbase-endpoint in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 
6s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 191m 19s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestAsyncAdmin |
| Timed out junit tests | 
org.apache.hadoop.hbase.regionserver.TestRegionReplicas |
|   | org.apache.hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster |
|   | org.apache.hadoop.hbase.regionserver.TestStore |
|   | org.apache.hadoop.hbase.regionserver.TestCompactionState |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.13.1 Server=1.13.1 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12854369/HBASE-16991-v3.patch |
| JIRA Issue | HBASE-16991 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 6c50a22de750 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HBASE-17689) hbase thrift2 THBaseservice support table.existsAll

2017-02-23 Thread Yechao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yechao Chen updated HBASE-17689:

Labels: thrift2  (was: )

> hbase thrift2 THBaseservice support table.existsAll
> ---
>
> Key: HBASE-17689
> URL: https://issues.apache.org/jira/browse/HBASE-17689
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Yechao Chen
>  Labels: thrift2
>
> hbase thrift2 support existsAll(List<Get> gets) throws IOException;
> hbase.thrift add a method to service THBaseService like this
> list<bool> existsAll(
>   1: required binary table,
>   2: required list<TGet> tgets
> ) throws (1:TIOError io)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17312) [JDK8] Use default method for Observer Coprocessors

2017-02-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15882080#comment-15882080
 ] 

Hadoop QA commented on HBASE-17312:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 53 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
54s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 33s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
37s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
38s {color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
28s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 36s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 22s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
21s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s 
{color} | {color:red} hbase-server generated 2 new + 1 unchanged - 0 fixed = 3 
total (was 1) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 59s 
{color} | {color:red} root generated 2 new + 19 unchanged - 0 fixed = 21 total 
(was 19) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 25s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 15s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 48s {color} 
| {color:red} hbase-thrift in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 40s 
{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 21s {color} 
| {color:red} hbase-endpoint in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s 
{color} | {color:green} hbase-it in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hbase-examples in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 119m 52s 
{color} | {color:red} root in the patch failed. 

[jira] [Commented] (HBASE-17689) hbase thrift2 THBaseservice support table.existsAll

2017-02-23 Thread Yechao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15882077#comment-15882077
 ] 

Yechao Chen commented on HBASE-17689:
-

[~xieliang007] I want to add this method to THBaseService, please have a look.

> hbase thrift2 THBaseservice support table.existsAll
> ---
>
> Key: HBASE-17689
> URL: https://issues.apache.org/jira/browse/HBASE-17689
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Yechao Chen
>
> hbase thrift2 support existsAll(List<Get> gets) throws IOException;
> hbase.thrift add a method to service THBaseService like this
> list<bool> existsAll(
>   1: required binary table,
>   2: required list<TGet> tgets
> ) throws (1:TIOError io)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HBASE-17689) hbase thrift2 THBaseservice support table.existsAll

2017-02-23 Thread Yechao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15882066#comment-15882066
 ] 

Yechao Chen edited comment on HBASE-17689 at 2/24/17 6:42 AM:
--

ThriftHBaseServiceHandler.java needs to add a method like:

  @Override
  public List<Boolean> existsAll(ByteBuffer table, List<TGet> gets) throws TIOError, TException {
    Table htable = getTable(table);
    try {
      boolean[] exists = htable.existsAll(getsFromThrift(gets));
      List<Boolean> result = new ArrayList<>();
      for (boolean exist : exists) {
        result.add(exist);
      }
      return result;
    } catch (IOException e) {
      throw getTIOError(e);
    } finally {
      closeTable(htable);
    }
  }



was (Author: chenyechao):
ThriftHBaseServiceHandler.java add a method:
  @Override
  public List<Boolean> existsAll(ByteBuffer table, List<TGet> gets) throws TIOError, TException {
    Table htable = getTable(table);
    try {
      boolean[] exists = htable.existsAll(getsFromThrift(gets));
      List<Boolean> result = new ArrayList<>();
      for (boolean exist : exists) {
        result.add(exist);
      }
      return result;
    } catch (IOException e) {
      throw getTIOError(e);
    } finally {
      closeTable(htable);
    }
  }


> hbase thrift2 THBaseservice support table.existsAll
> ---
>
> Key: HBASE-17689
> URL: https://issues.apache.org/jira/browse/HBASE-17689
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Yechao Chen
>
> hbase thrift2 support existsAll(List<Get> gets) throws IOException;
> hbase.thrift add a method to service THBaseService like this
> list<bool> existsAll(
>   1: required binary table,
>   2: required list<TGet> tgets
> ) throws (1:TIOError io)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17689) hbase thrift2 THBaseservice support table.existsAll

2017-02-23 Thread Yechao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15882066#comment-15882066
 ] 

Yechao Chen commented on HBASE-17689:
-

ThriftHBaseServiceHandler.java add a method:
  @Override
  public List<Boolean> existsAll(ByteBuffer table, List<TGet> gets) throws TIOError, TException {
    Table htable = getTable(table);
    try {
      boolean[] exists = htable.existsAll(getsFromThrift(gets));
      List<Boolean> result = new ArrayList<>();
      for (boolean exist : exists) {
        result.add(exist);
      }
      return result;
    } catch (IOException e) {
      throw getTIOError(e);
    } finally {
      closeTable(htable);
    }
  }


> hbase thrift2 THBaseservice support table.existsAll
> ---
>
> Key: HBASE-17689
> URL: https://issues.apache.org/jira/browse/HBASE-17689
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Yechao Chen
>
> hbase thrift2 support existsAll(List<Get> gets) throws IOException;
> hbase.thrift add a method to service THBaseService like this
> list<bool> existsAll(
>   1: required binary table,
>   2: required list<TGet> tgets
> ) throws (1:TIOError io)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17689) hbase thrift2 THBaseservice support table.existsAll

2017-02-23 Thread Yechao Chen (JIRA)
Yechao Chen created HBASE-17689:
---

 Summary: hbase thrift2 THBaseservice support table.existsAll
 Key: HBASE-17689
 URL: https://issues.apache.org/jira/browse/HBASE-17689
 Project: HBase
  Issue Type: Improvement
  Components: Thrift
Reporter: Yechao Chen


hbase thrift2 support existsAll(List<Get> gets) throws IOException;

hbase.thrift add a method to service THBaseService like this
list<bool> existsAll(
  1: required binary table,
  2: required list<TGet> tgets
) throws (1:TIOError io)




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17495) TestHRegionWithInMemoryFlush#testFlushCacheWhileScanning intermittently fails due to assertion error

2017-02-23 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15882045#comment-15882045
 ] 

ramkrishna.s.vasudevan commented on HBASE-17495:


[~anastas], [~eshcar] - FYI.
I can check this if you are not looking into it.

> TestHRegionWithInMemoryFlush#testFlushCacheWhileScanning intermittently fails 
> due to assertion error
> 
>
> Key: HBASE-17495
> URL: https://issues.apache.org/jira/browse/HBASE-17495
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
> Attachments: 17495-testHRegionWithInMemoryFlush-output-2.0123, 
> testHRegionWithInMemoryFlush-flush-output.0123, 
> TestHRegionWithInMemoryFlush-out.0222.tar.gz, 
> testHRegionWithInMemoryFlush-output.0119
>
>
> Looping through the test (based on commit 
> 76dc957f64fa38ce88694054db7dbf590f368ae7), I saw the following test failure:
> {code}
> testFlushCacheWhileScanning(org.apache.hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush)
>   Time elapsed: 0.53 sec  <<< FAILURE!
> java.lang.AssertionError: toggle=false i=940 ts=1484852861597 expected:<94> 
> but was:<92>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHRegion.testFlushCacheWhileScanning(TestHRegion.java:3533)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> {code}
> See test output for details.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HBASE-17584) Expose ScanMetrics with ResultScanner rather than Scan

2017-02-23 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang reassigned HBASE-17584:
-

Assignee: Duo Zhang

> Expose ScanMetrics with ResultScanner rather than Scan
> --
>
> Key: HBASE-17584
> URL: https://issues.apache.org/jira/browse/HBASE-17584
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, mapreduce, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
>
> I think this have been discussed many times... It is a bad practice to 
> directly modify the Scan object passed in when calling getScanner. The reason 
> that we can not use a copy is we need to use the Scan object to expose scan 
> metrics. So we need to find another way to expose the metrics.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17682) Region stuck in merging_new state indefinitely

2017-02-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15882036#comment-15882036
 ] 

Hudson commented on HBASE-17682:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK8 #124 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/124/])
HBASE-17682 Region stuck in merging_new state indefinitely (apurtell: rev 
bd438aadc8cc88047547656194f4717ad8e89915)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java


> Region stuck in merging_new state indefinitely
> --
>
> Key: HBASE-17682
> URL: https://issues.apache.org/jira/browse/HBASE-17682
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5, 1.1.10
>
> Attachments: HBASE-17682.branch-1.3.001.patch, 
> HBASE-17682.master.001.patch
>
>
> Ran into this issue while tinkering with a chaos monkey that did splits, 
> merges and kills exclusively, which resulted in regions getting stuck in 
> transition in the MERGING_NEW state indefinitely. I think this happens when the 
> RS is killed during the merge but before the PONR, in which case the new 
> region's state in the master is MERGING_NEW. When the RS dies at this point, the 
> master executes RegionStates.serverOffline() for the RS, which does:
> {code}
> for (RegionState state : regionsInTransition.values()) {
> HRegionInfo hri = state.getRegion();
> if (assignedRegions.contains(hri)) {
>   // Region is open on this region server, but in transition.
>   // This region must be moving away from this server, or 
> splitting/merging.
>   // SSH will handle it, either skip assigning, or re-assign.
>   LOG.info("Transitioning " + state + " will be handled by 
> ServerCrashProcedure for " + sn);
> } else if (sn.equals(state.getServerName())) {
>   // Region is in transition on this region server, and this
>   // region is not open on this server. So the region must be
>   // moving to this server from another one (i.e. opening or
>   // pending open on this server, was open on another one.
>   // Offline state is also kind of pending open if the region is in
>   // transition. The region could be in failed_close state too if we 
> have
>   // tried several times to open it while this region server is not 
> reachable)
>   if (state.isPendingOpenOrOpening() || state.isFailedClose() || 
> state.isOffline()) {
> LOG.info("Found region in " + state +
>   " to be reassigned by ServerCrashProcedure for " + sn);
> rits.add(hri);
>   } else if(state.isSplittingNew()) {
> regionsToCleanIfNoMetaEntry.add(state.getRegion());
>   } else {
> LOG.warn("THIS SHOULD NOT HAPPEN: unexpected " + state);
>   }
> }
>   }
> {code}
> We do not handle MERGING_NEW here and end up with "THIS SHOULD NOT HAPPEN: 
> unexpected ...". After this, the new region, which does not have any data, is 
> stuck, which prevents the balancer from running.
> I think we should handle MERGING_NEW the same way as SPLITTING_NEW. 
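The proposed fix, treating a MERGING_NEW region the same way as a SPLITTING_NEW one in serverOffline(), can be sketched with a toy model. The enum values and method below are illustrative only and do not match HBase's RegionStates implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the serverOffline() branch quoted above. Illustrative names
// only; not HBase's RegionStates code.
enum ToyState { PENDING_OPEN, FAILED_CLOSE, OFFLINE, SPLITTING_NEW, MERGING_NEW, OPEN }

class OfflineHandler {
  // With the proposed fix, MERGING_NEW is treated exactly like SPLITTING_NEW:
  // the never-initialized region is queued for cleanup if it has no meta entry,
  // instead of falling through to the "THIS SHOULD NOT HAPPEN" warning.
  static List<ToyState> regionsToCleanIfNoMetaEntry(List<ToyState> inTransition) {
    List<ToyState> toClean = new ArrayList<>();
    for (ToyState s : inTransition) {
      if (s == ToyState.SPLITTING_NEW || s == ToyState.MERGING_NEW) {
        toClean.add(s);
      }
    }
    return toClean;
  }
}
```

A region left in MERGING_NEW by a crashed region server would then be cleaned up instead of staying in transition indefinitely and blocking the balancer.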



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15314) Allow more than one backing file in bucketcache

2017-02-23 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15882021#comment-15882021
 ] 

ramkrishna.s.vasudevan commented on HBASE-15314:


[~zjushch]
I think the .java file you attached is not complete. Can you 
prepare it as a patch? There are some changes in BucketAllocator for 
enabling/disabling the allocation range and also for evicting blocks in that range.

> Allow more than one backing file in bucketcache
> ---
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: Aaron Tokhy
> Attachments: FileIOEngine.java, HBASE-15314.master.001.patch, 
> HBASE-15314.master.001.patch, HBASE-15314.patch, HBASE-15314-v2.patch, 
> HBASE-15314-v3.patch
>
>
> Allow bucketcache use more than just one backing file: e.g. chassis has more 
> than one SSD in it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16630) Fragmentation in long running Bucket Cache

2017-02-23 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15882011#comment-15882011
 ] 

ramkrishna.s.vasudevan commented on HBASE-16630:


Which of the above patches should be committed? [~tedyu] and [~dvdreddy]?
I can see two V3 patches, one with a 'suggest' suffix.

> Fragmentation in long running Bucket Cache
> --
>
> Key: HBASE-16630
> URL: https://issues.apache.org/jira/browse/HBASE-16630
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3
>Reporter: deepankar
>Assignee: deepankar
>Priority: Critical
> Attachments: 16630-v2-suggest.patch, 16630-v3-suggest.patch, 
> HBASE-16630.patch, HBASE-16630-v2.patch, HBASE-16630-v3.patch
>
>
> As we have been running the bucket cache for a long time in our system, we are 
> observing cases where some nodes, after some time, do not fully utilize the 
> bucket cache; in some cases it is even worse, in the sense that they get stuck at a 
> value < 0.25% of the bucket cache (DEFAULT_MEMORY_FACTOR, as all our tables 
> are configured in-memory for simplicity's sake).
> We took a heap dump, analyzed what is happening, and saw that it is a classic 
> case of fragmentation. The current implementation of BucketCache (mainly 
> BucketAllocator) relies on the logic that fullyFreeBuckets are available for 
> switching/adjusting cache usage between different bucketSizes. But once a 
> compaction / bulkload happens and the blocks are evicted from a bucket size, 
> they are usually evicted from random places in the buckets of that bucketSize, 
> thus locking the number of buckets associated with a bucketSize. In the worst 
> cases of fragmentation we have seen some bucketSizes with an occupancy ratio 
> of < 10% that still don't have any completelyFreeBuckets to share with the 
> other bucketSizes. 
> Currently the existing eviction logic helps in the cases where the cache used is 
> more than MEMORY_FACTOR or MULTI_FACTOR, and once those evictions are 
> done, the eviction (freeSpace function) will not evict anything and the cache 
> utilization will be stuck at that value without any allocations for other 
> required sizes.
> The fix we came up with is simple: we do a defragmentation 
> (compaction) of the bucketSize, thus increasing the occupancy ratio and 
> also freeing up buckets to be fully free. This logic itself is not 
> complicated, as the bucketAllocator takes care of packing the blocks into the 
> buckets; we need to evict and re-allocate the blocks for all the bucketSizes 
> that don't fit the criteria.
> I am attaching an initial patch just to give an idea of what we are thinking 
> and I'll improve it based on comments from the community.
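The defragmentation criterion described above (a bucket size whose occupancy ratio has fallen below some threshold is a candidate for evict-and-reallocate) can be illustrated with a toy class. The names and threshold value are illustrative only, not the BucketAllocator code:

```java
// Toy illustration of the defragmentation criterion: a bucket size whose
// blocks are spread thinly across its buckets ends up with a low occupancy
// ratio yet no completely free buckets to hand back. Illustrative names only.
class BucketSizeInfo {
  final int usedBlocks;
  final int totalBlocks;

  BucketSizeInfo(int usedBlocks, int totalBlocks) {
    this.usedBlocks = usedBlocks;
    this.totalBlocks = totalBlocks;
  }

  // Fraction of block slots of this bucket size that hold live blocks.
  double occupancyRatio() {
    return (double) usedBlocks / totalBlocks;
  }

  // Candidate for defragmentation (evict and re-allocate, packing blocks
  // densely) when occupancy falls below the chosen threshold.
  boolean needsDefrag(double threshold) {
    return occupancyRatio() < threshold;
  }
}
```

A bucket size at 5% occupancy would qualify under a 10% threshold, while one at 50% would not; the defragmentation pass would repack the qualifying sizes so whole buckets become free again.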



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17662) Disable in-memory flush when replaying from WAL

2017-02-23 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15882009#comment-15882009
 ] 

ramkrishna.s.vasudevan commented on HBASE-17662:


+1 on this patch if we agree with moving the check under the 'if' condition 
that checks for the size.

> Disable in-memory flush when replaying from WAL
> ---
>
> Key: HBASE-17662
> URL: https://issues.apache.org/jira/browse/HBASE-17662
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17662-V02.patch, HBASE-17662-V03.patch, 
> HBASE-17662-V04.patch, HBASE-17662-V05.patch
>
>
> When replaying the edits from WAL, the region's updateLock is not taken, 
> because a single-threaded action is assumed. However, the thread-safety of 
> the in-memory flush of CompactingMemStore is based on taking the region's 
> updateLock. 
> The in-memory flush can be skipped at replay time (anyway, everything is 
> flushed to disk just after the replay). Therefore it is acceptable to simply 
> skip the in-memory flush action while the updates come in as part of replay 
> from the WAL.
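The guard described above, skipping the in-memory flush check while edits are replayed from the WAL, can be sketched with a toy model. The field and method names are illustrative only, not the actual CompactingMemStore implementation:

```java
// Toy model of the replay guard. Illustrative names only; not the actual
// CompactingMemStore code.
class ToyCompactingMemStore {
  private long size = 0;
  private final long flushThreshold;
  private final boolean inWalReplay; // true while edits are replayed from WAL
  int inMemoryFlushes = 0;           // how many in-memory flushes fired

  ToyCompactingMemStore(long flushThreshold, boolean inWalReplay) {
    this.flushThreshold = flushThreshold;
    this.inWalReplay = inWalReplay;
  }

  void add(long cellSize) {
    size += cellSize;
    // During replay we skip the in-memory flush entirely: the region's
    // updateLock is not held, and a flush to disk follows replay anyway.
    if (size > flushThreshold && !inWalReplay) {
      inMemoryFlushes++;
      size = 0;
    }
  }
}
```

With the guard in place, crossing the threshold during replay leaves the in-memory segment untouched, avoiding the unsynchronized flush that the updateLock would normally protect against.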



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17460) enable_table_replication can not perform cyclic replication of a table

2017-02-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881993#comment-15881993
 ] 

Hadoop QA commented on HBASE-17460:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 7s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
31s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
29m 16s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 20s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
9s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 33s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12854374/17460-addendum.v2.txt 
|
| JIRA Issue | HBASE-17460 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux e161341c8982 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / c90d484 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5823/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5823/testReport/ |
| modules | C: hbase-client U: hbase-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5823/console |
| Powered by | Apache Yetus 0.3.0   

[jira] [Commented] (HBASE-16630) Fragmentation in long running Bucket Cache

2017-02-23 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881995#comment-15881995
 ] 

ramkrishna.s.vasudevan commented on HBASE-16630:


+1 for commit to master and branch - 1.2. 

> Fragmentation in long running Bucket Cache
> --
>
> Key: HBASE-16630
> URL: https://issues.apache.org/jira/browse/HBASE-16630
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3
>Reporter: deepankar
>Assignee: deepankar
>Priority: Critical
> Attachments: 16630-v2-suggest.patch, 16630-v3-suggest.patch, 
> HBASE-16630.patch, HBASE-16630-v2.patch, HBASE-16630-v3.patch
>
>
> As we have been running the bucket cache for a long time in our system, we are
> observing cases where some nodes, after some time, do not fully utilize the
> bucket cache; in some cases it is even worse, in the sense that they get stuck
> at a value < 0.25 of the bucket cache (DEFAULT_MEMORY_FACTOR, as all our tables
> are configured in-memory for simplicity's sake).
> We took a heap dump and analyzed what was happening, and saw a classic case of
> fragmentation. The current implementation of BucketCache (mainly
> BucketAllocator) relies on fullyFreeBuckets being available for
> switching/adjusting cache usage between different bucketSizes. But once a
> compaction/bulkload happens and blocks are evicted for a bucketSize, they are
> usually evicted from random places within that bucketSize's buckets, which
> locks up the number of buckets associated with that bucketSize. In the worst
> cases of fragmentation we have seen bucketSizes with an occupancy ratio of
> < 10%, yet without any completelyFreeBuckets to share with the other
> bucketSizes.
> The existing eviction logic only helps while cache usage exceeds the
> MEMORY_FACTOR or MULTI_FACTOR thresholds; once those evictions are done, the
> eviction (freeSpace function) will not evict anything further, and cache
> utilization stays stuck at that value without any allocations for the other
> required sizes.
> The fix we came up with is simple: defragment (compact) the bucketSize, which
> increases the occupancy ratio and also frees up buckets to become fully free.
> This logic itself is not complicated, as the BucketAllocator takes care of
> packing the blocks into buckets; we need to evict and re-allocate the blocks
> for all the bucketSizes that do not fit the criteria.
> I am attaching an initial patch just to give an idea of what we are thinking,
> and I'll improve it based on comments from the community.
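The defragmentation idea above can be illustrated with a toy model (this is not the actual BucketAllocator code; names and the packing arithmetic are illustrative): re-packing blocks from many partially used buckets of one bucketSize frees whole buckets that can then be shared with other bucket sizes.

```java
// Toy model of defragmentation within one bucketSize: count how many
// completelyFreeBuckets appear once the scattered blocks are packed into
// as few buckets as possible.
public class DefragSketch {
    // each bucket holds up to 'slots' blocks; used[i] = blocks in bucket i
    static int freeBucketsAfterDefrag(int[] used, int slots) {
        int totalBlocks = 0;
        for (int u : used) totalBlocks += u;
        int bucketsNeeded = (totalBlocks + slots - 1) / slots; // ceiling division
        return used.length - bucketsNeeded;
    }

    public static void main(String[] args) {
        // 4 buckets of 8 slots, each only 2/8 full: low occupancy, zero free
        // buckets. After packing, the 8 blocks fit in 1 bucket and 3 buckets
        // become fully free for other bucketSizes.
        System.out.println(freeBucketsAfterDefrag(new int[]{2, 2, 2, 2}, 8)); // 3
    }
}
```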



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17688) MultiRowRangeFilter not working correctly if given same start and stop RowKey

2017-02-23 Thread Ravi Ahuj (JIRA)
Ravi Ahuj created HBASE-17688:
-

 Summary: MultiRowRangeFilter not working correctly if given same 
start and stop RowKey
 Key: HBASE-17688
 URL: https://issues.apache.org/jira/browse/HBASE-17688
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.2
Reporter: Ravi Ahuj
Priority: Minor




try (final Connection conn = ConnectionFactory.createConnection(conf);
     final Table scanTable = conn.getTable(table)) {
  ArrayList<MultiRowRangeFilter.RowRange> rowRangesList = new ArrayList<>();

  String startRowkey = "abc";
  String stopRowkey = "abc";
  rowRangesList.add(new MultiRowRangeFilter.RowRange(startRowkey, true, stopRowkey, true));

  Scan scan = new Scan();
  scan.setFilter(new MultiRowRangeFilter(rowRangesList));

  ResultScanner scanner = scanTable.getScanner(scan);
  for (Result result : scanner) {
    String rowkey = new String(result.getRow());
    System.out.println(rowkey);
  }
}

In the Java HBase API, we want to perform multiple scans of a table using
MultiRowRangeFilter.
When we supply ranges with a start row key and a stop row key, the filter does
not work properly when the start row key and the stop row key are the same.
Ideally it should return only the single row with that row key, but instead it
returns all rows starting from that row key in the HBase table.
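For reference, the expected semantics can be modeled with a minimal sketch (this is a simplified stand-in, not HBase's actual RowRange implementation): with startRow equal to stopRow and both bounds inclusive, exactly one row key should match.

```java
// Simplified model of an inclusive row range, illustrating the behavior the
// reporter expects from MultiRowRangeFilter.RowRange("abc", true, "abc", true):
// the range [abc, abc] contains "abc" and nothing else.
public class RowRangeModel {
    static boolean contains(byte[] start, boolean startInclusive,
                            byte[] stop, boolean stopInclusive, byte[] row) {
        int cmpStart = compare(row, start);
        int cmpStop = compare(row, stop);
        boolean afterStart = startInclusive ? cmpStart >= 0 : cmpStart > 0;
        boolean beforeStop = stopInclusive ? cmpStop <= 0 : cmpStop < 0;
        return afterStart && beforeStop;
    }

    // unsigned lexicographic comparison, as HBase orders row keys
    static int compare(byte[] a, byte[] b) {
        for (int i = 0; i < Math.min(a.length, b.length); i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        byte[] abc = "abc".getBytes();
        System.out.println(contains(abc, true, abc, true, abc));              // true
        System.out.println(contains(abc, true, abc, true, "abd".getBytes())); // false
    }
}
```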



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17682) Region stuck in merging_new state indefinitely

2017-02-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881982#comment-15881982
 ] 

Hudson commented on HBASE-17682:


FAILURE: Integrated in Jenkins build HBase-1.2-JDK8 #101 (See 
[https://builds.apache.org/job/HBase-1.2-JDK8/101/])
HBASE-17682 Region stuck in merging_new state indefinitely (apurtell: rev 
4caed356f15fec6ace0f5e7641e7526d1e01f7bb)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java


> Region stuck in merging_new state indefinitely
> --
>
> Key: HBASE-17682
> URL: https://issues.apache.org/jira/browse/HBASE-17682
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5, 1.1.10
>
> Attachments: HBASE-17682.branch-1.3.001.patch, 
> HBASE-17682.master.001.patch
>
>
> Ran into this issue while tinkering with a chaos monkey that did splits,
> merges and kills exclusively, which resulted in regions getting stuck in
> transition in the MERGING_NEW state indefinitely. I think this happens when
> the RS is killed during the merge but before the PONR (point of no return),
> in which case the new region's state in the master is MERGING_NEW. When the
> RS dies at this point the master executes RegionStates.serverOffline() for
> the RS, which does
> {code}
> for (RegionState state : regionsInTransition.values()) {
> HRegionInfo hri = state.getRegion();
> if (assignedRegions.contains(hri)) {
>   // Region is open on this region server, but in transition.
>   // This region must be moving away from this server, or 
> splitting/merging.
>   // SSH will handle it, either skip assigning, or re-assign.
>   LOG.info("Transitioning " + state + " will be handled by 
> ServerCrashProcedure for " + sn);
> } else if (sn.equals(state.getServerName())) {
>   // Region is in transition on this region server, and this
>   // region is not open on this server. So the region must be
>   // moving to this server from another one (i.e. opening or
>   // pending open on this server, was open on another one.
>   // Offline state is also kind of pending open if the region is in
>   // transition. The region could be in failed_close state too if we 
> have
>   // tried several times to open it while this region server is not 
> reachable)
>   if (state.isPendingOpenOrOpening() || state.isFailedClose() || 
> state.isOffline()) {
> LOG.info("Found region in " + state +
>   " to be reassigned by ServerCrashProcedure for " + sn);
> rits.add(hri);
>   } else if(state.isSplittingNew()) {
> regionsToCleanIfNoMetaEntry.add(state.getRegion());
>   } else {
> LOG.warn("THIS SHOULD NOT HAPPEN: unexpected " + state);
>   }
> }
>   }
> {code}
> We do not handle MERGING_NEW here and end up with "THIS SHOULD NOT HAPPEN:
> unexpected ...". After this, the new region, which does not hold any data, is
> stuck, which prevents the balancer from running.
> I think we should handle MERGING_NEW the same way as SPLITTING_NEW.
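The proposed handling can be sketched as follows. This is a simplified, self-contained model of the branch logic only, not the actual RegionStates code; the state and action names are illustrative.

```java
import java.util.EnumSet;

// Sketch of the fix: when a region server dies, a region in MERGING_NEW that
// never reached meta should be cleaned up exactly like one in SPLITTING_NEW,
// instead of falling into the "THIS SHOULD NOT HAPPEN" branch.
public class ServerOfflineSketch {
    enum State { PENDING_OPEN, OPENING, FAILED_CLOSE, OFFLINE, SPLITTING_NEW, MERGING_NEW, OPEN }
    enum Action { REASSIGN, CLEAN_IF_NO_META_ENTRY, UNEXPECTED }

    static Action onServerOffline(State state) {
        if (EnumSet.of(State.PENDING_OPEN, State.OPENING,
                       State.FAILED_CLOSE, State.OFFLINE).contains(state)) {
            return Action.REASSIGN; // reassigned by ServerCrashProcedure
        } else if (state == State.SPLITTING_NEW || state == State.MERGING_NEW) {
            // Adding MERGING_NEW here avoids the region staying in transition
            // forever after the RS is killed mid-merge before the PONR.
            return Action.CLEAN_IF_NO_META_ENTRY;
        }
        return Action.UNEXPECTED;
    }

    public static void main(String[] args) {
        System.out.println(onServerOffline(State.MERGING_NEW)); // CLEAN_IF_NO_META_ENTRY
    }
}
```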



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17687) hive on hbase table and phoenix table can't be selected

2017-02-23 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881963#comment-15881963
 ] 

Ashish Singhi commented on HBASE-17687:
---

This issue seems to be vendor-specific. Please check with the vendor first.
Thanks

> hive on hbase table and phoenix table can't  be selected
> 
>
> Key: HBASE-17687
> URL: https://issues.apache.org/jira/browse/HBASE-17687
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase
>Affects Versions: 1.0.2
> Environment: hadoop 2.7.2
> hbase 1.0.2
> phoenix 4.4
> hive 1.3
> all above are based on huawei FusionInsight HD(FusionInsight 
> V100R002C60U10SPC001)
>Reporter: yunliangchen
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> First, I created a table in Phoenix, like this:
> ---
> DROP TABLE IF EXISTS bidwd_test01 CASCADE;
> CREATE TABLE IF NOT EXISTS bidwd_test01(
>rk VARCHAR,
>c1 integer,
>c2 VARCHAR,
>c3 VARCHAR,
>c4 VARCHAR
>constraint bidwd_test01_pk primary key(rk)
> )
> COMPRESSION='SNAPPY'
> ;
> ---
> Then I upserted two rows into the table:
> ---
> upsert into bidwd_test01 values('001',1,'zhangsan','20170217','2017-02-17 
> 12:34:22');
> upsert into bidwd_test01 values('002',2,'lisi','20170216','2017-02-16 
> 12:34:22');
> ---
> Finally, I scanned the table like this:
> ---
> select * from bidwd_test01;
> ---
> It's OK so far, but I want to create a Hive-on-HBase table that maps to
> the Phoenix table. The script looks like this:
> ---
> USE BIDWD;
> DROP TABLE test01;
> CREATE EXTERNAL TABLE test01
> (
>  rk string,
>  id int,
>  name string,
>  datekey string,
>  time_stamp string
> )
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'  
> WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,0:C1,0:C2,0:C3,0:C4")  
> TBLPROPERTIES ("hbase.table.name" = "BIDWD_TEST01");
> ---
> Then I also try to insert some data into the table and scan it:
> ---
> set hive.execution.engine=mr;
> insert into test01 values('003',3,'lisi2','20170215','2017-02-15 12:34:22');
> select * from test01;
> ---
> But there are some problems, like this:
> +------------+------------+--------------+-----------------+----------------------+
> | test01.rk  | test01.id  | test01.name  | test01.datekey  | test01.time_stamp    |
> +------------+------------+--------------+-----------------+----------------------+
> | 001        | NULL       | zhangsan     | 20170217        | 2017-02-17 12:34:22  |
> | 002        | NULL       | lisi         | 20170216        | 2017-02-16 12:34:22  |
> | 003        | 3          | lisi2        | 20170215        | 2017-02-15 12:34:22  |
> +------------+------------+--------------+-----------------+----------------------+
> the column "id"'s value was NULL; only the last row is OK.
> But when I scan the data in Phoenix, there are errors like this:
> Error: ERROR 201 (22000): Illegal data. Expected length of at least 115 
> bytes, but had 31 (state=22000,code=201)
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
> least 115 bytes, but had 31
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:389)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>   at 
> org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:211)
>   at 
> org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:113)
>   at 
> org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:69)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.getString(PhoenixResultSet.java:591)
>   at sqlline.Rows$Row.<init>(Rows.java:183)
>   at sqlline.BufferedRows.<init>(BufferedRows.java:38)
>   at sqlline.SqlLine.print(SqlLine.java:1546)
>   at sqlline.Commands.execute(Commands.java:833)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:702)
>   at sqlline.SqlLine.begin(SqlLine.java:575)
>   at 

[jira] [Updated] (HBASE-17460) enable_table_replication can not perform cyclic replication of a table

2017-02-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17460:
---
Attachment: 17460-addendum.v2.txt

> enable_table_replication can not perform cyclic replication of a table
> --
>
> Key: HBASE-17460
> URL: https://issues.apache.org/jira/browse/HBASE-17460
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: NITIN VERMA
>Assignee: NITIN VERMA
>  Labels: incompatibleChange, replication
> Fix For: 2.0.0
>
> Attachments: 17460-addendum.txt, 17460-addendum.v2.txt, 
> 17460.branch-1.v3.txt, 17460.v5.txt, HBASE-17460.patch, HBASE-17460_v2.patch, 
> HBASE-17460_v3.patch, HBASE-17460_v4.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> The enable_table_replication operation is broken for cyclic replication of 
> HBase table as we compare all the properties of column families (including 
> REPLICATION_SCOPE). 
> Below is exactly what happens:
> 1. Running the "enable_table_replication 'table1'" operation on the first cluster 
> will set the REPLICATION_SCOPE of all column families to peer id '1'. This 
> will also create a table on second cluster where REPLICATION_SCOPE is still 
> set to peer id '0'.
> 2. Now when we run "enable_table_replication 'table1'" on second cluster, we 
> compare all the properties of the table (including REPLICATION_SCOPE), which 
> is obviously different now. 
> I am proposing a fix for this issue where we should avoid comparing 
> REPLICATION_SCOPE inside HColumnDescriptor::compareTo() method, especially 
> when replication is not already enabled on the desired table.
> I have made that change and it is working. I will submit the patch soon.
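The proposed comparison can be sketched as follows; this is a hedged, self-contained model using plain property maps rather than the real HColumnDescriptor API, to show the idea of ignoring REPLICATION_SCOPE when checking whether the two clusters' schemas already match.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: compare two column-family property sets while ignoring
// REPLICATION_SCOPE, since the local and peer clusters legitimately differ
// on it until replication has been enabled on both sides.
public class CfCompareSketch {
    static boolean equalIgnoringScope(Map<String, String> a, Map<String, String> b) {
        Map<String, String> x = new HashMap<>(a);
        Map<String, String> y = new HashMap<>(b);
        x.remove("REPLICATION_SCOPE");
        y.remove("REPLICATION_SCOPE");
        return x.equals(y);
    }

    public static void main(String[] args) {
        Map<String, String> local = new HashMap<>();
        local.put("COMPRESSION", "SNAPPY");
        local.put("REPLICATION_SCOPE", "1"); // already enabled on this cluster
        Map<String, String> peer = new HashMap<>();
        peer.put("COMPRESSION", "SNAPPY");
        peer.put("REPLICATION_SCOPE", "0"); // not yet enabled on the peer
        System.out.println(equalIgnoringScope(local, peer)); // true
    }
}
```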



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17069) RegionServer writes invalid META entries for split daughters in some circumstances

2017-02-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881904#comment-15881904
 ] 

Hudson commented on HBASE-17069:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK7 #113 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/113/])
Amend HBASE-17069 RegionServer writes invalid META entries in some (apurtell: 
rev 9d9decb4d9179cc64e3a5d9753376ed69c517a0a)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java


> RegionServer writes invalid META entries for split daughters in some 
> circumstances
> --
>
> Key: HBASE-17069
> URL: https://issues.apache.org/jira/browse/HBASE-17069
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.4
>Reporter: Andrew Purtell
>Assignee: Abhishek Singh Chouhan
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5
>
> Attachments: daughter_1_d55ef81c2f8299abbddfce0445067830.log, 
> daughter_2_08629d59564726da2497f70451aafcdb.log, 
> HBASE-17069-addendum.branch-1.3.001.patch, HBASE-17069.branch-1.3.001.patch, 
> HBASE-17069.branch-1.3.002.patch, HBASE-17069.master.001.patch, logs.tar.gz, 
> parent-393d2bfd8b1c52ce08540306659624f2.log
>
>
> I have been seeing frequent ITBLL failures testing various versions of 1.2.x. 
> Over the lifetime of 1.2.x the following issues have been fixed:
> - HBASE-15315 (Remove always set super user call as high priority)
> - HBASE-16093 (Fix splits failed before creating daughter regions leave meta 
> inconsistent)
> And this one is pending:
> - HBASE-17044 (Fix merge failed before creating merged region leaves meta 
> inconsistent)
> I can apply all of the above to branch-1.2 and still see this failure: 
> *The life of stillborn region d55ef81c2f8299abbddfce0445067830*
> *Master sees SPLITTING_NEW*
> {noformat}
> 2016-11-08 04:23:21,186 INFO  [AM.ZK.Worker-pool2-t82] master.RegionStates: 
> Transition null to {d55ef81c2f8299abbddfce0445067830 state=SPLITTING_NEW, 
> ts=1478579001186, server=node-3.cluster,16020,1478578389506}
> {noformat}
> *The RegionServer creates it*
> {noformat}
> 2016-11-08 04:23:26,035 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for GomnU: blockCache=LruBlockCache{blockCount=34, 
> currentSize=14996112, freeSize=12823716208, maxSize=12838712320, 
> heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,038 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for big: blockCache=LruBlockCache{blockCount=34, 
> currentSize=14996112, freeSize=12823716208, maxSize=12838712320, 
> heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,442 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for meta: blockCache=LruBlockCache{blockCount=63, 
> currentSize=17187656, freeSize=12821524664, maxSize=12838712320, 
> heapSize=17187656, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,713 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for nwmrW: blockCache=LruBlockCache{blockCount=96, 
> currentSize=19178440, freeSize=12819533880, maxSize=12838712320, 
> heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,715 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for piwbr: blockCache=LruBlockCache{blockCount=96, 
> currentSize=19178440, freeSize=12819533880, maxSize=12838712320, 
> heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, 

[jira] [Commented] (HBASE-17682) Region stuck in merging_new state indefinitely

2017-02-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881905#comment-15881905
 ] 

Hudson commented on HBASE-17682:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK7 #113 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/113/])
HBASE-17682 Region stuck in merging_new state indefinitely (apurtell: rev 
bd438aadc8cc88047547656194f4717ad8e89915)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java


> Region stuck in merging_new state indefinitely
> --
>
> Key: HBASE-17682
> URL: https://issues.apache.org/jira/browse/HBASE-17682
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5, 1.1.10
>
> Attachments: HBASE-17682.branch-1.3.001.patch, 
> HBASE-17682.master.001.patch
>
>
> Ran into this issue while tinkering with a chaos monkey that did splits,
> merges and kills exclusively, which resulted in regions getting stuck in
> transition in the MERGING_NEW state indefinitely. I think this happens when
> the RS is killed during the merge but before the PONR (point of no return),
> in which case the new region's state in the master is MERGING_NEW. When the
> RS dies at this point the master executes RegionStates.serverOffline() for
> the RS, which does
> {code}
> for (RegionState state : regionsInTransition.values()) {
> HRegionInfo hri = state.getRegion();
> if (assignedRegions.contains(hri)) {
>   // Region is open on this region server, but in transition.
>   // This region must be moving away from this server, or 
> splitting/merging.
>   // SSH will handle it, either skip assigning, or re-assign.
>   LOG.info("Transitioning " + state + " will be handled by 
> ServerCrashProcedure for " + sn);
> } else if (sn.equals(state.getServerName())) {
>   // Region is in transition on this region server, and this
>   // region is not open on this server. So the region must be
>   // moving to this server from another one (i.e. opening or
>   // pending open on this server, was open on another one.
>   // Offline state is also kind of pending open if the region is in
>   // transition. The region could be in failed_close state too if we 
> have
>   // tried several times to open it while this region server is not 
> reachable)
>   if (state.isPendingOpenOrOpening() || state.isFailedClose() || 
> state.isOffline()) {
> LOG.info("Found region in " + state +
>   " to be reassigned by ServerCrashProcedure for " + sn);
> rits.add(hri);
>   } else if(state.isSplittingNew()) {
> regionsToCleanIfNoMetaEntry.add(state.getRegion());
>   } else {
> LOG.warn("THIS SHOULD NOT HAPPEN: unexpected " + state);
>   }
> }
>   }
> {code}
> We do not handle MERGING_NEW here and end up with "THIS SHOULD NOT HAPPEN:
> unexpected ...". After this, the new region, which does not hold any data, is
> stuck, which prevents the balancer from running.
> I think we should handle MERGING_NEW the same way as SPLITTING_NEW.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17687) hive on hbase table and phoenix table can't be selected

2017-02-23 Thread yunliangchen (JIRA)
yunliangchen created HBASE-17687:


 Summary: hive on hbase table and phoenix table can't  be selected
 Key: HBASE-17687
 URL: https://issues.apache.org/jira/browse/HBASE-17687
 Project: HBase
  Issue Type: Improvement
  Components: hbase
Affects Versions: 1.0.2
 Environment: hadoop 2.7.2
hbase 1.0.2
phoenix 4.4
hive 1.3
all above are based on huawei FusionInsight HD(FusionInsight 
V100R002C60U10SPC001)
Reporter: yunliangchen


First, I created a table in Phoenix, like this:
---
DROP TABLE IF EXISTS bidwd_test01 CASCADE;
CREATE TABLE IF NOT EXISTS bidwd_test01(
   rk VARCHAR,
   c1 integer,
   c2 VARCHAR,
   c3 VARCHAR,
   c4 VARCHAR
   constraint bidwd_test01_pk primary key(rk)
)
COMPRESSION='SNAPPY'
;
---
Then I upserted two rows into the table:
---
upsert into bidwd_test01 values('001',1,'zhangsan','20170217','2017-02-17 
12:34:22');
upsert into bidwd_test01 values('002',2,'lisi','20170216','2017-02-16 
12:34:22');
---
Finally, I scanned the table like this:
---
select * from bidwd_test01;
---

It's OK so far, but I want to create a Hive-on-HBase table that maps to
the Phoenix table. The script looks like this:
---
USE BIDWD;
DROP TABLE test01;
CREATE EXTERNAL TABLE test01
(
 rk string,
 id int,
 name string,
 datekey string,
 time_stamp string
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'  
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,0:C1,0:C2,0:C3,0:C4")  
TBLPROPERTIES ("hbase.table.name" = "BIDWD_TEST01");
---

Then I also try to insert some data into the table and scan it:
---
set hive.execution.engine=mr;
insert into test01 values('003',3,'lisi2','20170215','2017-02-15 12:34:22');
select * from test01;
---

But there are some problems, like this:
+------------+------------+--------------+-----------------+----------------------+
| test01.rk  | test01.id  | test01.name  | test01.datekey  | test01.time_stamp    |
+------------+------------+--------------+-----------------+----------------------+
| 001        | NULL       | zhangsan     | 20170217        | 2017-02-17 12:34:22  |
| 002        | NULL       | lisi         | 20170216        | 2017-02-16 12:34:22  |
| 003        | 3          | lisi2        | 20170215        | 2017-02-15 12:34:22  |
+------------+------------+--------------+-----------------+----------------------+

the column "id"'s value was NULL; only the last row is OK.
But when I scan the data in Phoenix, there are errors like this:
Error: ERROR 201 (22000): Illegal data. Expected length of at least 115 bytes, 
but had 31 (state=22000,code=201)
java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
least 115 bytes, but had 31
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:389)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:211)
at 
org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:113)
at 
org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:69)
at 
org.apache.phoenix.jdbc.PhoenixResultSet.getString(PhoenixResultSet.java:591)
at sqlline.Rows$Row.<init>(Rows.java:183)
at sqlline.BufferedRows.<init>(BufferedRows.java:38)
at sqlline.SqlLine.print(SqlLine.java:1546)
at sqlline.Commands.execute(Commands.java:833)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:702)
at sqlline.SqlLine.begin(SqlLine.java:575)
at sqlline.SqlLine.start(SqlLine.java:292)
at sqlline.SqlLine.main(SqlLine.java:194)

So I don't know why. How can I solve this problem?
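A likely explanation (an assumption on my part, not confirmed in this thread) is an encoding mismatch: Phoenix serializes an INTEGER as 4 big-endian bytes with the sign bit flipped so that byte order sorts correctly, while Hive's HBaseStorageHandler writes values as UTF-8 strings by default. The same cell therefore cannot decode correctly in both systems, which would explain both the NULL "id" in Hive and the "Illegal data" error in Phoenix. A minimal sketch of the two byte formats (it models the encodings; it does not use the actual Phoenix or Hive serializer classes):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Compare the bytes Phoenix stores for INTEGER 1 with the bytes Hive's
// string-based mapping stores for the same value.
public class EncodingMismatch {
    static byte[] phoenixInt(int v) {
        byte[] b = ByteBuffer.allocate(4).putInt(v).array(); // big-endian
        b[0] ^= 0x80; // flip the sign bit, as Phoenix's integer type does
        return b;
    }

    public static void main(String[] args) {
        byte[] phoenix = phoenixInt(1);                       // 4 bytes: 80 00 00 01
        byte[] hive = "1".getBytes(StandardCharsets.UTF_8);   // 1 byte: 31
        System.out.println(Arrays.equals(phoenix, hive));     // false
    }
}
```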





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16991) Make the initialization of AsyncConnection asynchronous

2017-02-23 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-16991:
--
Attachment: HBASE-16991-v3.patch

Retry.

> Make the initialization of AsyncConnection asynchronous
> ---
>
> Key: HBASE-16991
> URL: https://issues.apache.org/jira/browse/HBASE-16991
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16991.patch, HBASE-16991-v1.patch, 
> HBASE-16991-v2.patch, HBASE-16991-v3.patch
>
>
> Now the ConnectionFactory.createAsyncConnection is still blocking. We should 
> make it return a CompletableFuture to make the async client fully 
> asynchronous.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16991) Make the initialization of AsyncConnection asynchronous

2017-02-23 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-16991:
--
Attachment: (was: HBASE-16991-v3.patch)

> Make the initialization of AsyncConnection asynchronous
> ---
>
> Key: HBASE-16991
> URL: https://issues.apache.org/jira/browse/HBASE-16991
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16991.patch, HBASE-16991-v1.patch, 
> HBASE-16991-v2.patch, HBASE-16991-v3.patch
>
>
> Now the ConnectionFactory.createAsyncConnection is still blocking. We should 
> make it return a CompletableFuture to make the async client fully 
> asynchronous.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16991) Make the initialization of AsyncConnection asynchronous

2017-02-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881878#comment-15881878
 ] 

Hadoop QA commented on HBASE-16991:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 1s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
59s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
1s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 58s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 20s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 10s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 51s 
{color} | {color:green} hbase-endpoint in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
28s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 100m 52s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.master.locking.TestLockManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12854359/HBASE-16991-v3.patch |
| JIRA Issue | HBASE-16991 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 970bf3955fe8 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / c90d484 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5820/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  

[jira] [Commented] (HBASE-17069) RegionServer writes invalid META entries for split daughters in some circumstances

2017-02-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881872#comment-15881872
 ] 

Hudson commented on HBASE-17069:


FAILURE: Integrated in Jenkins build HBase-1.2-IT #606 (See 
[https://builds.apache.org/job/HBase-1.2-IT/606/])
Amend HBASE-17069 RegionServer writes invalid META entries in some (apurtell: 
rev fe00b59a3f11a0fffad271934d56ad8f733ca86b)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java


> RegionServer writes invalid META entries for split daughters in some 
> circumstances
> --
>
> Key: HBASE-17069
> URL: https://issues.apache.org/jira/browse/HBASE-17069
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.4
>Reporter: Andrew Purtell
>Assignee: Abhishek Singh Chouhan
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5
>
> Attachments: daughter_1_d55ef81c2f8299abbddfce0445067830.log, 
> daughter_2_08629d59564726da2497f70451aafcdb.log, 
> HBASE-17069-addendum.branch-1.3.001.patch, HBASE-17069.branch-1.3.001.patch, 
> HBASE-17069.branch-1.3.002.patch, HBASE-17069.master.001.patch, logs.tar.gz, 
> parent-393d2bfd8b1c52ce08540306659624f2.log
>
>
> I have been seeing frequent ITBLL failures testing various versions of 1.2.x. 
> Over the lifetime of 1.2.x the following issues have been fixed:
> - HBASE-15315 (Remove always set super user call as high priority)
> - HBASE-16093 (Fix splits failed before creating daughter regions leave meta 
> inconsistent)
> And this one is pending:
> - HBASE-17044 (Fix merge failed before creating merged region leaves meta 
> inconsistent)
> I can apply all of the above to branch-1.2 and still see this failure: 
> *The life of stillborn region d55ef81c2f8299abbddfce0445067830*
> *Master sees SPLITTING_NEW*
> {noformat}
> 2016-11-08 04:23:21,186 INFO  [AM.ZK.Worker-pool2-t82] master.RegionStates: 
> Transition null to {d55ef81c2f8299abbddfce0445067830 state=SPLITTING_NEW, 
> ts=1478579001186, server=node-3.cluster,16020,1478578389506}
> {noformat}
> *The RegionServer creates it*
> {noformat}
> 2016-11-08 04:23:26,035 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for GomnU: blockCache=LruBlockCache{blockCount=34, 
> currentSize=14996112, freeSize=12823716208, maxSize=12838712320, 
> heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,038 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for big: blockCache=LruBlockCache{blockCount=34, 
> currentSize=14996112, freeSize=12823716208, maxSize=12838712320, 
> heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,442 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for meta: blockCache=LruBlockCache{blockCount=63, 
> currentSize=17187656, freeSize=12821524664, maxSize=12838712320, 
> heapSize=17187656, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,713 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for nwmrW: blockCache=LruBlockCache{blockCount=96, 
> currentSize=19178440, freeSize=12819533880, maxSize=12838712320, 
> heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,715 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for piwbr: blockCache=LruBlockCache{blockCount=96, 
> currentSize=19178440, freeSize=12819533880, maxSize=12838712320, 
> heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, 

[jira] [Commented] (HBASE-17682) Region stuck in merging_new state indefinitely

2017-02-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881873#comment-15881873
 ] 

Hudson commented on HBASE-17682:


FAILURE: Integrated in Jenkins build HBase-1.2-IT #606 (See 
[https://builds.apache.org/job/HBase-1.2-IT/606/])
HBASE-17682 Region stuck in merging_new state indefinitely (apurtell: rev 
4caed356f15fec6ace0f5e7641e7526d1e01f7bb)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java


> Region stuck in merging_new state indefinitely
> --
>
> Key: HBASE-17682
> URL: https://issues.apache.org/jira/browse/HBASE-17682
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5, 1.1.10
>
> Attachments: HBASE-17682.branch-1.3.001.patch, 
> HBASE-17682.master.001.patch
>
>
> Ran into this issue while tinkering with a chaos monkey that did splits,
> merges and kills exclusively, which resulted in regions getting stuck in
> transition in the MERGING_NEW state indefinitely. I think this happens when
> the RS is killed during the merge but before the PONR (point of no return),
> in which case the new region's state in the master is MERGING_NEW. When the
> RS dies at this point, the master executes RegionStates.serverOffline() for
> the RS, which does
> {code}
> for (RegionState state : regionsInTransition.values()) {
> HRegionInfo hri = state.getRegion();
> if (assignedRegions.contains(hri)) {
>   // Region is open on this region server, but in transition.
>   // This region must be moving away from this server, or 
> splitting/merging.
>   // SSH will handle it, either skip assigning, or re-assign.
>   LOG.info("Transitioning " + state + " will be handled by 
> ServerCrashProcedure for " + sn);
> } else if (sn.equals(state.getServerName())) {
>   // Region is in transition on this region server, and this
>   // region is not open on this server. So the region must be
>   // moving to this server from another one (i.e. opening or
>   // pending open on this server, was open on another one.
>   // Offline state is also kind of pending open if the region is in
>   // transition. The region could be in failed_close state too if we 
> have
>   // tried several times to open it while this region server is not 
> reachable)
>   if (state.isPendingOpenOrOpening() || state.isFailedClose() || 
> state.isOffline()) {
> LOG.info("Found region in " + state +
>   " to be reassigned by ServerCrashProcedure for " + sn);
> rits.add(hri);
>   } else if(state.isSplittingNew()) {
> regionsToCleanIfNoMetaEntry.add(state.getRegion());
>   } else {
> LOG.warn("THIS SHOULD NOT HAPPEN: unexpected " + state);
>   }
> }
>   }
> {code}
> We do not handle MERGING_NEW here and end up with "THIS SHOULD NOT HAPPEN:
> unexpected ...". After this, the new region, which does not have any data,
> is stuck, which leads to the balancer not running.
> I think we should handle MERGING_NEW the same way as SPLITTING_NEW.
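The proposed handling can be illustrated with a minimal, self-contained sketch (the enum and method below are hypothetical stand-ins for the relevant part of RegionStates, not the real hbase-server classes): a region in MERGING_NEW on the dead server gets the same clean-if-no-meta-entry treatment the existing code already gives SPLITTING_NEW, instead of falling through to the warning branch.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the relevant subset of RegionState; the real
// class lives in hbase-server and has many more states and fields.
enum State { SPLITTING_NEW, MERGING_NEW, PENDING_OPEN, OFFLINE }

public class ServerOfflineSketch {
    // Collect regions whose META entry should be checked and cleaned up when
    // the hosting RS dies mid-split or mid-merge.
    static List<State> regionsToCleanIfNoMetaEntry(List<State> inTransition) {
        List<State> toClean = new ArrayList<>();
        for (State state : inTransition) {
            // Proposed fix: treat MERGING_NEW exactly like SPLITTING_NEW
            // rather than logging "THIS SHOULD NOT HAPPEN".
            if (state == State.SPLITTING_NEW || state == State.MERGING_NEW) {
                toClean.add(state);
            }
        }
        return toClean;
    }

    public static void main(String[] args) {
        List<State> rit = List.of(State.MERGING_NEW, State.PENDING_OPEN);
        System.out.println(regionsToCleanIfNoMetaEntry(rit));
    }
}
```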



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17584) Expose ScanMetrics with ResultScanner rather than Scan

2017-02-23 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881871#comment-15881871
 ] 

Duo Zhang commented on HBASE-17584:
---

No idea yet. Let me have a try first...

> Expose ScanMetrics with ResultScanner rather than Scan
> --
>
> Key: HBASE-17584
> URL: https://issues.apache.org/jira/browse/HBASE-17584
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, mapreduce, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
>
> I think this has been discussed many times... It is bad practice to
> directly modify the Scan object passed in when calling getScanner. The
> reason we cannot use a copy is that we need the Scan object to expose scan
> metrics. So we need to find another way to expose the metrics.
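One possible shape for such an API, sketched in a self-contained way (all names here — ScanMetricsSketch, ResultScannerSketch, getScanMetrics — are assumptions for illustration, not the committed HBase interface): the scanner owns the metrics, so getScanner can work on a defensive copy of the Scan.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of moving scan metrics onto the scanner. The class and
// method names are illustrative, not HBase's real API.
public class ScannerMetricsSketch {
    static class ScanMetricsSketch {
        final AtomicLong countOfRowsScanned = new AtomicLong();
    }

    static class ResultScannerSketch implements AutoCloseable {
        private final ScanMetricsSketch metrics = new ScanMetricsSketch();

        String next() {
            // A real scanner would fetch rows from the RS; here we just count.
            metrics.countOfRowsScanned.incrementAndGet();
            return "row-" + metrics.countOfRowsScanned.get();
        }

        // Metrics exposed by the scanner, so the Scan object passed to
        // getScanner no longer needs to be mutated to carry them back.
        ScanMetricsSketch getScanMetrics() {
            return metrics;
        }

        @Override
        public void close() {}
    }

    public static void main(String[] args) {
        try (ResultScannerSketch scanner = new ResultScannerSketch()) {
            scanner.next();
            scanner.next();
            System.out.println(scanner.getScanMetrics().countOfRowsScanned.get());
        }
    }
}
```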



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17069) RegionServer writes invalid META entries for split daughters in some circumstances

2017-02-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881867#comment-15881867
 ] 

Hudson commented on HBASE-17069:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2560 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2560/])
Amend HBASE-17069 RegionServer writes invalid META entries in some (apurtell: 
rev 0d656b1394c1bfb70218d69efbd10f879d2f)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java


> RegionServer writes invalid META entries for split daughters in some 
> circumstances
> --
>
> Key: HBASE-17069
> URL: https://issues.apache.org/jira/browse/HBASE-17069
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.4
>Reporter: Andrew Purtell
>Assignee: Abhishek Singh Chouhan
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5
>
> Attachments: daughter_1_d55ef81c2f8299abbddfce0445067830.log, 
> daughter_2_08629d59564726da2497f70451aafcdb.log, 
> HBASE-17069-addendum.branch-1.3.001.patch, HBASE-17069.branch-1.3.001.patch, 
> HBASE-17069.branch-1.3.002.patch, HBASE-17069.master.001.patch, logs.tar.gz, 
> parent-393d2bfd8b1c52ce08540306659624f2.log
>
>
> I have been seeing frequent ITBLL failures testing various versions of 1.2.x. 
> Over the lifetime of 1.2.x the following issues have been fixed:
> - HBASE-15315 (Remove always set super user call as high priority)
> - HBASE-16093 (Fix splits failed before creating daughter regions leave meta 
> inconsistent)
> And this one is pending:
> - HBASE-17044 (Fix merge failed before creating merged region leaves meta 
> inconsistent)
> I can apply all of the above to branch-1.2 and still see this failure: 
> *The life of stillborn region d55ef81c2f8299abbddfce0445067830*
> *Master sees SPLITTING_NEW*
> {noformat}
> 2016-11-08 04:23:21,186 INFO  [AM.ZK.Worker-pool2-t82] master.RegionStates: 
> Transition null to {d55ef81c2f8299abbddfce0445067830 state=SPLITTING_NEW, 
> ts=1478579001186, server=node-3.cluster,16020,1478578389506}
> {noformat}
> *The RegionServer creates it*
> {noformat}
> 2016-11-08 04:23:26,035 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for GomnU: blockCache=LruBlockCache{blockCount=34, 
> currentSize=14996112, freeSize=12823716208, maxSize=12838712320, 
> heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,038 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for big: blockCache=LruBlockCache{blockCount=34, 
> currentSize=14996112, freeSize=12823716208, maxSize=12838712320, 
> heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,442 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for meta: blockCache=LruBlockCache{blockCount=63, 
> currentSize=17187656, freeSize=12821524664, maxSize=12838712320, 
> heapSize=17187656, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,713 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for nwmrW: blockCache=LruBlockCache{blockCount=96, 
> currentSize=19178440, freeSize=12819533880, maxSize=12838712320, 
> heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,715 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for piwbr: blockCache=LruBlockCache{blockCount=96, 
> currentSize=19178440, freeSize=12819533880, maxSize=12838712320, 
> heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, 

[jira] [Commented] (HBASE-17682) Region stuck in merging_new state indefinitely

2017-02-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881868#comment-15881868
 ] 

Hudson commented on HBASE-17682:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2560 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2560/])
HBASE-17682 Region stuck in merging_new state indefinitely (apurtell: rev 
c90d484f617a52b2312e44b23be186008908da4d)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java


> Region stuck in merging_new state indefinitely
> --
>
> Key: HBASE-17682
> URL: https://issues.apache.org/jira/browse/HBASE-17682
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5, 1.1.10
>
> Attachments: HBASE-17682.branch-1.3.001.patch, 
> HBASE-17682.master.001.patch
>
>
> Ran into this issue while tinkering with a chaos monkey that did splits,
> merges and kills exclusively, which resulted in regions getting stuck in
> transition in the MERGING_NEW state indefinitely. I think this happens when
> the RS is killed during the merge but before the PONR (point of no return),
> in which case the new region's state in the master is MERGING_NEW. When the
> RS dies at this point, the master executes RegionStates.serverOffline() for
> the RS, which does
> {code}
> for (RegionState state : regionsInTransition.values()) {
> HRegionInfo hri = state.getRegion();
> if (assignedRegions.contains(hri)) {
>   // Region is open on this region server, but in transition.
>   // This region must be moving away from this server, or 
> splitting/merging.
>   // SSH will handle it, either skip assigning, or re-assign.
>   LOG.info("Transitioning " + state + " will be handled by 
> ServerCrashProcedure for " + sn);
> } else if (sn.equals(state.getServerName())) {
>   // Region is in transition on this region server, and this
>   // region is not open on this server. So the region must be
>   // moving to this server from another one (i.e. opening or
>   // pending open on this server, was open on another one.
>   // Offline state is also kind of pending open if the region is in
>   // transition. The region could be in failed_close state too if we 
> have
>   // tried several times to open it while this region server is not 
> reachable)
>   if (state.isPendingOpenOrOpening() || state.isFailedClose() || 
> state.isOffline()) {
> LOG.info("Found region in " + state +
>   " to be reassigned by ServerCrashProcedure for " + sn);
> rits.add(hri);
>   } else if(state.isSplittingNew()) {
> regionsToCleanIfNoMetaEntry.add(state.getRegion());
>   } else {
> LOG.warn("THIS SHOULD NOT HAPPEN: unexpected " + state);
>   }
> }
>   }
> {code}
> We do not handle MERGING_NEW here and end up with "THIS SHOULD NOT HAPPEN:
> unexpected ...". After this, the new region, which does not have any data,
> is stuck, which leads to the balancer not running.
> I think we should handle MERGING_NEW the same way as SPLITTING_NEW.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17680) Run mini cluster through JNI in tests

2017-02-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881865#comment-15881865
 ] 

Ted Yu commented on HBASE-17680:


w.r.t. compiler_flags: on Mac, there is no $JAVA_HOME/include/linux directory,
while on a docker VM, $JAVA_HOME/include/linux is needed to include jni.h.


> Run mini cluster through JNI in tests
> -
>
> Key: HBASE-17680
> URL: https://issues.apache.org/jira/browse/HBASE-17680
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 17680.v1.txt, 17680.v3.txt, 17680.v8.txt
>
>
> Currently tests start a local HBase cluster through the hbase shell.
> There is less control over the local cluster's configuration this way.
> This issue would replace the hbase shell with a JNI interface to the mini
> cluster, giving us full control over cluster behavior.
> Thanks to [~devaraj] who started this initiative.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17460) enable_table_replication can not perform cyclic replication of a table

2017-02-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881858#comment-15881858
 ] 

Hadoop QA commented on HBASE-17460:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 4s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
33s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
56s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
30m 33s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 18s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
8s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 45s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12854361/17460-addendum.v2.txt 
|
| JIRA Issue | HBASE-17460 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 5f5d9da6bf58 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / c90d484 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5821/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5821/testReport/ |
| modules | C: hbase-client U: hbase-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5821/console |
| Powered by | Apache Yetus 0.3.0   

[jira] [Comment Edited] (HBASE-17584) Expose ScanMetrics with ResultScanner rather than Scan

2017-02-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881850#comment-15881850
 ] 

stack edited comment on HBASE-17584 at 2/24/17 3:13 AM:


That'd be good yeah, but the reset of scan metrics on read is broke?


was (Author: stack):
That'd be good yeah, but the reset of scan metrics is broke?

> Expose ScanMetrics with ResultScanner rather than Scan
> --
>
> Key: HBASE-17584
> URL: https://issues.apache.org/jira/browse/HBASE-17584
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, mapreduce, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
>
> I think this has been discussed many times... It is bad practice to
> directly modify the Scan object passed in when calling getScanner. The
> reason we cannot use a copy is that we need the Scan object to expose scan
> metrics. So we need to find another way to expose the metrics.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17584) Expose ScanMetrics with ResultScanner rather than Scan

2017-02-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881850#comment-15881850
 ] 

stack commented on HBASE-17584:
---

That'd be good yeah, but the reset of scan metrics is broke?

> Expose ScanMetrics with ResultScanner rather than Scan
> --
>
> Key: HBASE-17584
> URL: https://issues.apache.org/jira/browse/HBASE-17584
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, mapreduce, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
>
> I think this has been discussed many times... It is bad practice to
> directly modify the Scan object passed in when calling getScanner. The
> reason we cannot use a copy is that we need the Scan object to expose scan
> metrics. So we need to find another way to expose the metrics.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16991) Make the initialization of AsyncConnection asynchronous

2017-02-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881848#comment-15881848
 ] 

stack commented on HBASE-16991:
---

+1 Looks great.

We have HBASE-17008 and HBASE-17009 for making this all more palatable to use.

> Make the initialization of AsyncConnection asynchronous
> ---
>
> Key: HBASE-16991
> URL: https://issues.apache.org/jira/browse/HBASE-16991
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16991.patch, HBASE-16991-v1.patch, 
> HBASE-16991-v2.patch, HBASE-16991-v3.patch
>
>
> Now the ConnectionFactory.createAsyncConnection is still blocking. We should 
> make it return a CompletableFuture to make the async client fully 
> asynchronous.
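The pattern the patch aims for can be illustrated with plain java.util.concurrent (the Conn class and "test-cluster" value below are stand-ins, not the real AsyncConnection or its registry): the factory returns immediately with a CompletableFuture that completes once the slow setup work finishes off the caller's thread.

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical stand-in for an asynchronous connection factory: instead of
// blocking in createAsyncConnection, return a future that completes when the
// (possibly slow) cluster-id / registry lookup is done.
public class AsyncInitSketch {
    static class Conn {
        final String clusterId;
        Conn(String clusterId) { this.clusterId = clusterId; }
    }

    static CompletableFuture<Conn> createAsyncConnection() {
        // supplyAsync models the setup work running on another thread so the
        // caller never blocks.
        return CompletableFuture.supplyAsync(() -> new Conn("test-cluster"));
    }

    public static void main(String[] args) {
        createAsyncConnection()
            .thenAccept(conn -> System.out.println("connected to " + conn.clusterId))
            .join(); // only the demo blocks here; real callers can stay fully async
    }
}
```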



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17682) Region stuck in merging_new state indefinitely

2017-02-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881847#comment-15881847
 ] 

Hudson commented on HBASE-17682:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #839 (See 
[https://builds.apache.org/job/HBase-1.3-IT/839/])
HBASE-17682 Region stuck in merging_new state indefinitely (apurtell: rev 
4caed356f15fec6ace0f5e7641e7526d1e01f7bb)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java


> Region stuck in merging_new state indefinitely
> --
>
> Key: HBASE-17682
> URL: https://issues.apache.org/jira/browse/HBASE-17682
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5, 1.1.10
>
> Attachments: HBASE-17682.branch-1.3.001.patch, 
> HBASE-17682.master.001.patch
>
>
> Ran into this issue while tinkering with a chaos monkey that did splits,
> merges and kills exclusively, which resulted in regions getting stuck in
> transition in the MERGING_NEW state indefinitely. I think this happens when
> the RS is killed during the merge but before the PONR (point of no return),
> in which case the new region's state in the master is MERGING_NEW. When the
> RS dies at this point, the master executes RegionStates.serverOffline() for
> the RS, which does
> {code}
> for (RegionState state : regionsInTransition.values()) {
> HRegionInfo hri = state.getRegion();
> if (assignedRegions.contains(hri)) {
>   // Region is open on this region server, but in transition.
>   // This region must be moving away from this server, or 
> splitting/merging.
>   // SSH will handle it, either skip assigning, or re-assign.
>   LOG.info("Transitioning " + state + " will be handled by 
> ServerCrashProcedure for " + sn);
> } else if (sn.equals(state.getServerName())) {
>   // Region is in transition on this region server, and this
>   // region is not open on this server. So the region must be
>   // moving to this server from another one (i.e. opening or
>   // pending open on this server, was open on another one.
>   // Offline state is also kind of pending open if the region is in
>   // transition. The region could be in failed_close state too if we 
> have
>   // tried several times to open it while this region server is not 
> reachable)
>   if (state.isPendingOpenOrOpening() || state.isFailedClose() || 
> state.isOffline()) {
> LOG.info("Found region in " + state +
>   " to be reassigned by ServerCrashProcedure for " + sn);
> rits.add(hri);
>   } else if(state.isSplittingNew()) {
> regionsToCleanIfNoMetaEntry.add(state.getRegion());
>   } else {
> LOG.warn("THIS SHOULD NOT HAPPEN: unexpected " + state);
>   }
> }
>   }
> {code}
> We do not handle MERGING_NEW here and end up with "THIS SHOULD NOT HAPPEN:
> unexpected ...". After this, the new region, which does not have any data,
> is stuck, which leads to the balancer not running.
> I think we should handle MERGING_NEW the same way as SPLITTING_NEW.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17009) Revisiting the removement of managed connection and connection caching

2017-02-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17009:
--
Fix Version/s: 2.0.0

> Revisiting the removement of managed connection and connection caching
> --
>
> Key: HBASE-17009
> URL: https://issues.apache.org/jira/browse/HBASE-17009
> Project: HBase
>  Issue Type: Task
>  Components: Operability
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Critical
> Fix For: 2.0.0
>
>
> In HBASE-13197 we did lots of good cleanups for the Connection API, but
> among them HBASE-13252 dropped the feature of managed connections and
> connection caching, and this JIRA proposes to revisit that decision for the
> reasons below.
> Assume we have a long-running process with multiple threads accessing HBase
> (a common case for streaming applications); let's compare what happened
> previously and what happens now.
> Previously:
> Users could create an HTable instance whenever they wanted w/o worrying
> about the underlying connections, because the HBase client managed them
> automatically: no matter how many threads, there would be only one
> Connection instance
> {code}
>   @Deprecated
>   public HTable(Configuration conf, final TableName tableName)
>   throws IOException {
> ...
> this.connection = ConnectionManager.getConnectionInternal(conf);
> ...
>   }
>   static ClusterConnection getConnectionInternal(final Configuration conf)
> throws IOException {
> HConnectionKey connectionKey = new HConnectionKey(conf);
> synchronized (CONNECTION_INSTANCES) {
>   HConnectionImplementation connection = 
> CONNECTION_INSTANCES.get(connectionKey);
>   if (connection == null) {
> connection = (HConnectionImplementation)createConnection(conf, true);
> CONNECTION_INSTANCES.put(connectionKey, connection);
>   } else if (connection.isClosed()) {
> ConnectionManager.deleteConnection(connectionKey, true);
> connection = (HConnectionImplementation)createConnection(conf, true);
> CONNECTION_INSTANCES.put(connectionKey, connection);
>   }
>   connection.incCount();
>   return connection;
> }
>   }
> {code}
> Now:
> Users have to create the connection themselves, using code like the below, as 
> indicated in our recommendations
> {code}
> Connection connection = ConnectionFactory.createConnection(conf);
> Table table = connection.getTable(tableName);
> {code}
> And they must make sure *only one* single connection is created in one 
> *process*, instead of creating HTable instances freely, or else there might be 
> many connections set up to zookeeper/RS from multiple threads. Users might also 
> ask "when should I close the connection?", and the answer is "make sure you 
> don't close it until the *process* shuts down".
> So now we have many more things for the user to "make sure" of, but habits are 
> hard to change. Users used to create a table instance in each thread 
> (according to which table each request accesses), so they will probably still 
> create connections everywhere, and then operators will have to frantically 
> resolve all kinds of problems...
> So I'm proposing to add back the managed connection and connection caching 
> support. IMHO it's something good that existed in our implementation before, so 
> let's bring it back and save the workload for operators when they decide to 
> upgrade from 1.x to 2.x.
> Thoughts?
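The caching behavior quoted from the old ConnectionManager.getConnectionInternal can be sketched in a few lines. This is a minimal, self-contained model only: the Conn interface and all names below are stand-ins for illustration, not the HBase API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Sketch of a process-wide connection cache in the spirit of the old
// ConnectionManager.getConnectionInternal. "Conn" stands in for
// org.apache.hadoop.hbase.client.Connection; the key stands in for
// HConnectionKey. Names here are assumptions, not HBase code.
class CachedConnections {
    interface Conn { boolean isClosed(); }

    private static final Map<String, Conn> INSTANCES = new ConcurrentHashMap<>();

    // Return the cached connection for this configuration key, creating a
    // new one only if none exists or the cached one has been closed.
    static Conn getOrCreate(String confKey, Supplier<Conn> factory) {
        return INSTANCES.compute(confKey,
            (k, c) -> (c == null || c.isClosed()) ? factory.get() : c);
    }
}
```

The quoted snippet additionally ref-counts the connection (incCount); this sketch shows only the get-or-create-if-closed core that shields users from creating one connection per thread.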





[jira] [Commented] (HBASE-17584) Expose ScanMetrics with ResultScanner rather than Scan

2017-02-23 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881844#comment-15881844
 ] 

Duo Zhang commented on HBASE-17584:
---

I think we should have at least one release that exposes it in both ways? 

> Expose ScanMetrics with ResultScanner rather than Scan
> --
>
> Key: HBASE-17584
> URL: https://issues.apache.org/jira/browse/HBASE-17584
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, mapreduce, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
>
> I think this has been discussed many times... It is bad practice to 
> directly modify the Scan object passed in when calling getScanner. The reason 
> we cannot use a copy is that we need the Scan object to expose scan 
> metrics. So we need to find another way to expose the metrics.
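One possible shape of the change, sketched with toy classes (these names are assumptions for illustration, not the committed HBase API): the scanner owns and exposes its own metrics, so getScanner is free to work on a defensive copy of the Scan.

```java
// Toy sketch of moving metrics from the Scan to the scanner that
// produced them. None of these names are real HBase classes.
class ScanMetricsSketch {
    static class ScanMetrics { long rowsScanned; }

    // A ResultScanner-like object that owns its metrics.
    static class CountingScanner {
        private final ScanMetrics metrics = new ScanMetrics();

        void scanOneRow() { metrics.rowsScanned++; } // stand-in for next()

        // The proposed accessor location: on the scanner, not the Scan.
        ScanMetrics getScanMetrics() { return metrics; }
    }
}
```

With this shape, callers read metrics from the scanner they iterate, and the Scan object passed to getScanner is never mutated.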





[jira] [Updated] (HBASE-17008) Examples, Doc, and Helper Classes to make AsyncClient go down easy

2017-02-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17008:
--
Priority: Critical  (was: Major)

> Examples, Doc, and Helper Classes to make AsyncClient go down easy
> --
>
> Key: HBASE-17008
> URL: https://issues.apache.org/jira/browse/HBASE-17008
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.0
>
>
> The parent issue is about delivering a new, async client. The new client 
> operates at a pretty low level. There will be questions on how best to use it.
> Some have come up already over in HBASE-16991. In particular, [~Apache9] and 
> [~carp84] talk about the tier they have to put on top of hbase because its 
> API is not user-friendly.
> This issue is about adding the examples, docs, and helper classes we need 
> to make the new async client more palatable to mortals. See HBASE-16991, for 
> instance, for an example of how to cache an AsyncConnection that an 
> application might make use of.





[jira] [Commented] (HBASE-16630) Fragmentation in long running Bucket Cache

2017-02-23 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881840#comment-15881840
 ] 

Sean Busbey commented on HBASE-16630:
-

seems fine for branch-1.2, but doesn't seem worth blocking 1.2.5. I'll probably 
make a go at 1.2.5 RC this weekend, so there's time.

Would really like a test, but that'd be fine as a follow-on.

> Fragmentation in long running Bucket Cache
> --
>
> Key: HBASE-16630
> URL: https://issues.apache.org/jira/browse/HBASE-16630
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3
>Reporter: deepankar
>Assignee: deepankar
>Priority: Critical
> Attachments: 16630-v2-suggest.patch, 16630-v3-suggest.patch, 
> HBASE-16630.patch, HBASE-16630-v2.patch, HBASE-16630-v3.patch
>
>
> As we have been running the bucket cache for a long time in our system, we are 
> observing cases where some nodes, after some time, do not fully utilize the 
> bucket cache; in some cases it is even worse, in the sense that they get stuck 
> at a value < 0.25% of the bucket cache (DEFAULT_MEMORY_FACTOR, as all our 
> tables are configured in-memory for simplicity's sake).
> We took a heap dump, analyzed what is happening, and saw that it is a classic 
> case of fragmentation. The current implementation of BucketCache (mainly 
> BucketAllocator) relies on the logic that fullyFreeBuckets are available for 
> switching/adjusting cache usage between different bucketSizes. But once a 
> compaction / bulkload happens and the blocks are evicted from a bucket size, 
> they are usually evicted from random places in the buckets of that bucketSize, 
> thus locking the number of buckets associated with that bucketSize. In the 
> worst cases of fragmentation we have seen some bucketSizes with an occupancy 
> ratio of < 10%, but they don't have any completelyFreeBuckets to share with 
> the other bucketSizes.
> Currently the existing eviction logic helps in the cases where the cache used 
> is more than the MEMORY_FACTOR or MULTI_FACTOR, and once those evictions are 
> done, the eviction (freeSpace function) will not evict anything, and the cache 
> utilization will be stuck at that value without any allocations for other 
> required sizes.
> The fix we came up with is simple: we do a defragmentation 
> (compaction) of the bucketSize, thus increasing the occupancy ratio and 
> also freeing up buckets to be fully free. This logic itself is not 
> complicated, as the bucketAllocator takes care of packing the blocks into the 
> buckets; we need to evict and re-allocate the blocks for all the bucketSizes 
> that don't fit the criteria.
> I am attaching an initial patch just to give an idea of what we are thinking, 
> and I'll improve it based on the comments from the community.
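The defragmentation idea can be illustrated with a toy model. This is not the BucketAllocator code; the names and numbers below are made up: packing the same used slots into fewer buckets turns partially used buckets into completely free ones that other bucket sizes can then claim.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the proposed defragmentation, for illustration only.
// usedPerBucket.get(i) = occupied slots in bucket i of one bucketSize.
class DefragSketch {
    // Count buckets with zero occupied slots (the "completelyFreeBuckets").
    static long completelyFree(List<Integer> usedPerBucket) {
        return usedPerBucket.stream().filter(u -> u == 0).count();
    }

    // Evict-and-reallocate: pack the same total usage into as few buckets
    // as possible; the remaining buckets of this size become fully free.
    static List<Integer> compact(int slotsPerBucket, List<Integer> usedPerBucket) {
        int total = usedPerBucket.stream().mapToInt(Integer::intValue).sum();
        List<Integer> packed = new ArrayList<>();
        for (int i = 0; i < usedPerBucket.size(); i++) {
            int take = Math.min(slotsPerBucket, total);
            packed.add(take);
            total -= take;
        }
        return packed;
    }
}
```

For example, four buckets of four slots each holding one block apiece have zero completely free buckets; after compaction one bucket holds all four blocks and the other three are free to serve other bucket sizes.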





[jira] [Commented] (HBASE-15314) Allow more than one backing file in bucketcache

2017-02-23 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881836#comment-15881836
 ] 

chunhui shen commented on HBASE-15314:
--

bq. How would you handle the case of striping
Read the block from two files if it crosses a file boundary, just like the 
handling in ByteBufferIOEngine, which reads a block from multiple ByteBuffers.
This logic is implemented by FileIOEngine#accessFile in the attachment 
'FileIOEngine.java'.

Thanks

> Allow more than one backing file in bucketcache
> ---
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: Aaron Tokhy
> Attachments: FileIOEngine.java, HBASE-15314.master.001.patch, 
> HBASE-15314.master.001.patch, HBASE-15314.patch, HBASE-15314-v2.patch, 
> HBASE-15314-v3.patch
>
>
> Allow bucketcache to use more than just one backing file: e.g. the chassis has 
> more than one SSD in it.





[jira] [Commented] (HBASE-17584) Expose ScanMetrics with ResultScanner rather than Scan

2017-02-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881838#comment-15881838
 ] 

stack commented on HBASE-17584:
---

Do we have to? In 2.0 change the behavior?

> Expose ScanMetrics with ResultScanner rather than Scan
> --
>
> Key: HBASE-17584
> URL: https://issues.apache.org/jira/browse/HBASE-17584
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, mapreduce, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
>
> I think this has been discussed many times... It is bad practice to 
> directly modify the Scan object passed in when calling getScanner. The reason 
> we cannot use a copy is that we need the Scan object to expose scan 
> metrics. So we need to find another way to expose the metrics.





[jira] [Commented] (HBASE-17069) RegionServer writes invalid META entries for split daughters in some circumstances

2017-02-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881822#comment-15881822
 ] 

Hudson commented on HBASE-17069:


SUCCESS: Integrated in Jenkins build HBase-1.2-JDK7 #107 (See 
[https://builds.apache.org/job/HBase-1.2-JDK7/107/])
Amend HBASE-17069 RegionServer writes invalid META entries in some (apurtell: 
rev fe00b59a3f11a0fffad271934d56ad8f733ca86b)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java


> RegionServer writes invalid META entries for split daughters in some 
> circumstances
> --
>
> Key: HBASE-17069
> URL: https://issues.apache.org/jira/browse/HBASE-17069
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.4
>Reporter: Andrew Purtell
>Assignee: Abhishek Singh Chouhan
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5
>
> Attachments: daughter_1_d55ef81c2f8299abbddfce0445067830.log, 
> daughter_2_08629d59564726da2497f70451aafcdb.log, 
> HBASE-17069-addendum.branch-1.3.001.patch, HBASE-17069.branch-1.3.001.patch, 
> HBASE-17069.branch-1.3.002.patch, HBASE-17069.master.001.patch, logs.tar.gz, 
> parent-393d2bfd8b1c52ce08540306659624f2.log
>
>
> I have been seeing frequent ITBLL failures testing various versions of 1.2.x. 
> Over the lifetime of 1.2.x the following issues have been fixed:
> - HBASE-15315 (Remove always set super user call as high priority)
> - HBASE-16093 (Fix splits failed before creating daughter regions leave meta 
> inconsistent)
> And this one is pending:
> - HBASE-17044 (Fix merge failed before creating merged region leaves meta 
> inconsistent)
> I can apply all of the above to branch-1.2 and still see this failure: 
> *The life of stillborn region d55ef81c2f8299abbddfce0445067830*
> *Master sees SPLITTING_NEW*
> {noformat}
> 2016-11-08 04:23:21,186 INFO  [AM.ZK.Worker-pool2-t82] master.RegionStates: 
> Transition null to {d55ef81c2f8299abbddfce0445067830 state=SPLITTING_NEW, 
> ts=1478579001186, server=node-3.cluster,16020,1478578389506}
> {noformat}
> *The RegionServer creates it*
> {noformat}
> 2016-11-08 04:23:26,035 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for GomnU: blockCache=LruBlockCache{blockCount=34, 
> currentSize=14996112, freeSize=12823716208, maxSize=12838712320, 
> heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,038 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for big: blockCache=LruBlockCache{blockCount=34, 
> currentSize=14996112, freeSize=12823716208, maxSize=12838712320, 
> heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,442 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for meta: blockCache=LruBlockCache{blockCount=63, 
> currentSize=17187656, freeSize=12821524664, maxSize=12838712320, 
> heapSize=17187656, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,713 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for nwmrW: blockCache=LruBlockCache{blockCount=96, 
> currentSize=19178440, freeSize=12819533880, maxSize=12838712320, 
> heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,715 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for piwbr: blockCache=LruBlockCache{blockCount=96, 
> currentSize=19178440, freeSize=12819533880, maxSize=12838712320, 
> heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, 

[jira] [Commented] (HBASE-17682) Region stuck in merging_new state indefinitely

2017-02-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881824#comment-15881824
 ] 

Hudson commented on HBASE-17682:


SUCCESS: Integrated in Jenkins build HBase-1.2-JDK7 #107 (See 
[https://builds.apache.org/job/HBase-1.2-JDK7/107/])
HBASE-17682 Region stuck in merging_new state indefinitely (apurtell: rev 
4caed356f15fec6ace0f5e7641e7526d1e01f7bb)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java


> Region stuck in merging_new state indefinitely
> --
>
> Key: HBASE-17682
> URL: https://issues.apache.org/jira/browse/HBASE-17682
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5, 1.1.10
>
> Attachments: HBASE-17682.branch-1.3.001.patch, 
> HBASE-17682.master.001.patch
>
>
> Ran into this issue while tinkering with a chaos monkey that did splits, 
> merges and kills exclusively, which resulted in regions getting stuck in 
> transition in the MERGING_NEW state indefinitely. I think this happens when the 
> RS is killed during the merge but before the PONR, in which case the new 
> region's state in the master is MERGING_NEW. When the RS dies at this point the 
> master executes RegionStates.serverOffline() for the RS, which does
> {code}
> for (RegionState state : regionsInTransition.values()) {
> HRegionInfo hri = state.getRegion();
> if (assignedRegions.contains(hri)) {
>   // Region is open on this region server, but in transition.
>   // This region must be moving away from this server, or 
> splitting/merging.
>   // SSH will handle it, either skip assigning, or re-assign.
>   LOG.info("Transitioning " + state + " will be handled by 
> ServerCrashProcedure for " + sn);
> } else if (sn.equals(state.getServerName())) {
>   // Region is in transition on this region server, and this
>   // region is not open on this server. So the region must be
>   // moving to this server from another one (i.e. opening or
>   // pending open on this server, was open on another one.
>   // Offline state is also kind of pending open if the region is in
>   // transition. The region could be in failed_close state too if we 
> have
>   // tried several times to open it while this region server is not 
> reachable)
>   if (state.isPendingOpenOrOpening() || state.isFailedClose() || 
> state.isOffline()) {
> LOG.info("Found region in " + state +
>   " to be reassigned by ServerCrashProcedure for " + sn);
> rits.add(hri);
>   } else if(state.isSplittingNew()) {
> regionsToCleanIfNoMetaEntry.add(state.getRegion());
>   } else {
> LOG.warn("THIS SHOULD NOT HAPPEN: unexpected " + state);
>   }
> }
>   }
> {code}
> We do not handle MERGING_NEW here and end up with "THIS SHOULD NOT HAPPEN: 
> unexpected ...". After this we have the new region, which does not have any 
> data, stuck in transition, which leads to the balancer not running.
> I think we should handle MERGING_NEW the same way as SPLITTING_NEW. 
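The proposed fix amounts to one more branch in the quoted serverOffline logic. A toy classification (assumed names, not RegionStates itself) of how each in-transition state on the dead server would be treated:

```java
// Toy model of the serverOffline() decision, for illustration only.
// The proposed fix is that MERGING_NEW takes the same branch as
// SPLITTING_NEW, instead of falling through to the "unexpected" warning.
class ServerOfflineSketch {
    enum State { PENDING_OPEN, SPLITTING_NEW, MERGING_NEW, OTHER }

    static String handle(State s) {
        switch (s) {
            case PENDING_OPEN:
                return "reassign";               // re-assigned by ServerCrashProcedure
            case SPLITTING_NEW:
            case MERGING_NEW:                    // the fix: treat like SPLITTING_NEW
                return "cleanIfNoMetaEntry";     // drop the stillborn region if no meta row
            default:
                return "unexpected";             // the "THIS SHOULD NOT HAPPEN" path
        }
    }
}
```

With MERGING_NEW grouped alongside SPLITTING_NEW, the never-materialized merged region is cleaned up instead of staying in transition and blocking the balancer.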





[jira] [Commented] (HBASE-16630) Fragmentation in long running Bucket Cache

2017-02-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881825#comment-15881825
 ] 

stack commented on HBASE-16630:
---

+1 Go for it.

Should it go into hbase-1.2.5 [~anoop.hbase]? [~busbey] FYI

> Fragmentation in long running Bucket Cache
> --
>
> Key: HBASE-16630
> URL: https://issues.apache.org/jira/browse/HBASE-16630
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3
>Reporter: deepankar
>Assignee: deepankar
>Priority: Critical
> Attachments: 16630-v2-suggest.patch, 16630-v3-suggest.patch, 
> HBASE-16630.patch, HBASE-16630-v2.patch, HBASE-16630-v3.patch
>
>
> As we have been running the bucket cache for a long time in our system, we are 
> observing cases where some nodes, after some time, do not fully utilize the 
> bucket cache; in some cases it is even worse, in the sense that they get stuck 
> at a value < 0.25% of the bucket cache (DEFAULT_MEMORY_FACTOR, as all our 
> tables are configured in-memory for simplicity's sake).
> We took a heap dump, analyzed what is happening, and saw that it is a classic 
> case of fragmentation. The current implementation of BucketCache (mainly 
> BucketAllocator) relies on the logic that fullyFreeBuckets are available for 
> switching/adjusting cache usage between different bucketSizes. But once a 
> compaction / bulkload happens and the blocks are evicted from a bucket size, 
> they are usually evicted from random places in the buckets of that bucketSize, 
> thus locking the number of buckets associated with that bucketSize. In the 
> worst cases of fragmentation we have seen some bucketSizes with an occupancy 
> ratio of < 10%, but they don't have any completelyFreeBuckets to share with 
> the other bucketSizes.
> Currently the existing eviction logic helps in the cases where the cache used 
> is more than the MEMORY_FACTOR or MULTI_FACTOR, and once those evictions are 
> done, the eviction (freeSpace function) will not evict anything, and the cache 
> utilization will be stuck at that value without any allocations for other 
> required sizes.
> The fix we came up with is simple: we do a defragmentation 
> (compaction) of the bucketSize, thus increasing the occupancy ratio and 
> also freeing up buckets to be fully free. This logic itself is not 
> complicated, as the bucketAllocator takes care of packing the blocks into the 
> buckets; we need to evict and re-allocate the blocks for all the bucketSizes 
> that don't fit the criteria.
> I am attaching an initial patch just to give an idea of what we are thinking, 
> and I'll improve it based on the comments from the community.





[jira] [Commented] (HBASE-17338) Treat Cell data size under global memstore heap size only when that Cell can not be copied to MSLAB

2017-02-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881820#comment-15881820
 ] 

stack commented on HBASE-17338:
---

+1 

On commit, add more commentary to MemstoreSize about the difference between heap 
and data size (it can be a version of the comment that appears later in the 
RegionServerAccounting class...)

Glad of the simplification. Good stuff @anoop sam john


> Treat Cell data size under global memstore heap size only when that Cell can 
> not be copied to MSLAB
> ---
>
> Key: HBASE-17338
> URL: https://issues.apache.org/jira/browse/HBASE-17338
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-17338.patch, HBASE-17338_V2.patch, 
> HBASE-17338_V2.patch, HBASE-17338_V4.patch
>
>
> We have only data size and heap overhead being tracked globally.  The off-heap 
> memstore works with an off-heap-backed MSLAB pool.  But a cell, when added to 
> the memstore, does not always get copied to MSLAB.  Append/Increment ops doing 
> an upsert don't use MSLAB.  Also, based on the cell size, we sometimes avoid 
> the MSLAB copy.  But now we track these cells' data size under the global 
> memstore data size as well, which indicates the off-heap size in the case of an 
> off-heap memstore.  For global flush checks (against lower/upper watermark 
> levels), we check this size against the max off-heap memstore size.  We do 
> check heap overhead against the global heap memstore size (defaults to 40% of 
> xmx), but for such cells the data size should also be accounted under the heap 
> overhead.
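A toy accounting model of the proposal (the class and field names below are made up for illustration): a cell that is not copied to MSLAB stays on heap, so its data size is counted under the heap size in addition to the per-cell heap overhead, rather than only under the off-heap data size.

```java
// Toy sketch of the proposed memstore accounting; not HBase code.
class MemstoreAccounting {
    long offHeapDataSize; // checked against the max off-heap memstore size
    long heapSize;        // checked against the global heap memstore size

    void addCell(long cellDataSize, long cellHeapOverhead, boolean copiedToMslab) {
        heapSize += cellHeapOverhead;        // overhead always lives on heap
        if (copiedToMslab) {
            offHeapDataSize += cellDataSize; // data lives in an off-heap MSLAB chunk
        } else {
            heapSize += cellDataSize;        // data stays on heap: count it there too
        }
    }
}
```

The flush watermark checks then see the true on-heap footprint even when upserts or oversized cells bypass the MSLAB copy.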





[jira] [Updated] (HBASE-17460) enable_table_replication can not perform cyclic replication of a table

2017-02-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17460:
---
Attachment: 17460-addendum.v2.txt

Corrected the condition pointed out above.

The variable result holds the return value of compareTo(), so it should remain 
an int; the method's return type has been changed to boolean.

> enable_table_replication can not perform cyclic replication of a table
> --
>
> Key: HBASE-17460
> URL: https://issues.apache.org/jira/browse/HBASE-17460
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: NITIN VERMA
>Assignee: NITIN VERMA
>  Labels: incompatibleChange, replication
> Fix For: 2.0.0
>
> Attachments: 17460-addendum.txt, 17460-addendum.v2.txt, 
> 17460.branch-1.v3.txt, 17460.v5.txt, HBASE-17460.patch, HBASE-17460_v2.patch, 
> HBASE-17460_v3.patch, HBASE-17460_v4.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> The enable_table_replication operation is broken for cyclic replication of an 
> HBase table because we compare all the properties of the column families 
> (including REPLICATION_SCOPE). 
> Below is exactly what happens:
> 1. Running the "enable_table_replication 'table1'" operation on the first 
> cluster will set the REPLICATION_SCOPE of all column families to peer id '1'. 
> This will also create a table on the second cluster where REPLICATION_SCOPE is 
> still set to peer id '0'.
> 2. Now when we run "enable_table_replication 'table1'" on the second cluster, 
> we compare all the properties of the table (including REPLICATION_SCOPE), 
> which obviously differ now. 
> I am proposing a fix for this issue where we avoid comparing 
> REPLICATION_SCOPE inside the HColumnDescriptor::compareTo() method, especially 
> when replication is not already enabled on the desired table.
> I have made that change and it is working. I will submit the patch soon.
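The proposed fix can be sketched as a property comparison that ignores the replication scope. Representing a column family as a property map is a simplification for illustration; the real patch would touch HColumnDescriptor::compareTo.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of comparing column-family descriptors while ignoring
// REPLICATION_SCOPE, so the table on the peer cluster still matches
// after one side has enabled replication. Illustration only.
class FamilyCompare {
    static boolean equalIgnoringScope(Map<String, String> local, Map<String, String> remote) {
        Map<String, String> a = new HashMap<>(local);
        Map<String, String> b = new HashMap<>(remote);
        a.remove("REPLICATION_SCOPE"); // scope legitimately differs across clusters
        b.remove("REPLICATION_SCOPE");
        return a.equals(b);
    }
}
```

Under this comparison, two families that differ only in scope (peer id '1' versus '0') are treated as the same schema, which is what cyclic enable_table_replication needs.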





[jira] [Commented] (HBASE-17460) enable_table_replication can not perform cyclic replication of a table

2017-02-23 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881785#comment-15881785
 ] 

Enis Soztutar commented on HBASE-17460:
---

Thanks. 
This is not correct: 
{code}
+  if (remoteHCDIter.hasNext() && localHCDIter.hasNext()) {
{code}
Should be {{||}}. Or just check their sizes before the iteration. Also please 
change the variable {{int result}} to be a boolean.

> enable_table_replication can not perform cyclic replication of a table
> --
>
> Key: HBASE-17460
> URL: https://issues.apache.org/jira/browse/HBASE-17460
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: NITIN VERMA
>Assignee: NITIN VERMA
>  Labels: incompatibleChange, replication
> Fix For: 2.0.0
>
> Attachments: 17460-addendum.txt, 17460.branch-1.v3.txt, 17460.v5.txt, 
> HBASE-17460.patch, HBASE-17460_v2.patch, HBASE-17460_v3.patch, 
> HBASE-17460_v4.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> The enable_table_replication operation is broken for cyclic replication of an 
> HBase table because we compare all the properties of the column families 
> (including REPLICATION_SCOPE). 
> Below is exactly what happens:
> 1. Running the "enable_table_replication 'table1'" operation on the first 
> cluster will set the REPLICATION_SCOPE of all column families to peer id '1'. 
> This will also create a table on the second cluster where REPLICATION_SCOPE is 
> still set to peer id '0'.
> 2. Now when we run "enable_table_replication 'table1'" on the second cluster, 
> we compare all the properties of the table (including REPLICATION_SCOPE), 
> which obviously differ now. 
> I am proposing a fix for this issue where we avoid comparing 
> REPLICATION_SCOPE inside the HColumnDescriptor::compareTo() method, especially 
> when replication is not already enabled on the desired table.
> I have made that change and it is working. I will submit the patch soon.





[jira] [Updated] (HBASE-16991) Make the initialization of AsyncConnection asynchronous

2017-02-23 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-16991:
--
Attachment: HBASE-16991-v3.patch

Fix TestZKAsyncRegistry.

> Make the initialization of AsyncConnection asynchronous
> ---
>
> Key: HBASE-16991
> URL: https://issues.apache.org/jira/browse/HBASE-16991
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16991.patch, HBASE-16991-v1.patch, 
> HBASE-16991-v2.patch, HBASE-16991-v3.patch
>
>
> Now the ConnectionFactory.createAsyncConnection is still blocking. We should 
> make it return a CompletableFuture to make the async client fully 
> asynchronous.





[jira] [Commented] (HBASE-17069) RegionServer writes invalid META entries for split daughters in some circumstances

2017-02-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881779#comment-15881779
 ] 

Hudson commented on HBASE-17069:


FAILURE: Integrated in Jenkins build HBase-1.4 #644 (See 
[https://builds.apache.org/job/HBase-1.4/644/])
Amend HBASE-17069 RegionServer writes invalid META entries in some (apurtell: 
rev 2b6e9b3a3af5901f2eab5e38b591c27c33887698)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java


> RegionServer writes invalid META entries for split daughters in some 
> circumstances
> --
>
> Key: HBASE-17069
> URL: https://issues.apache.org/jira/browse/HBASE-17069
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.4
>Reporter: Andrew Purtell
>Assignee: Abhishek Singh Chouhan
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5
>
> Attachments: daughter_1_d55ef81c2f8299abbddfce0445067830.log, 
> daughter_2_08629d59564726da2497f70451aafcdb.log, 
> HBASE-17069-addendum.branch-1.3.001.patch, HBASE-17069.branch-1.3.001.patch, 
> HBASE-17069.branch-1.3.002.patch, HBASE-17069.master.001.patch, logs.tar.gz, 
> parent-393d2bfd8b1c52ce08540306659624f2.log
>
>
> I have been seeing frequent ITBLL failures testing various versions of 1.2.x. 
> Over the lifetime of 1.2.x the following issues have been fixed:
> - HBASE-15315 (Remove always set super user call as high priority)
> - HBASE-16093 (Fix splits failed before creating daughter regions leave meta 
> inconsistent)
> And this one is pending:
> - HBASE-17044 (Fix merge failed before creating merged region leaves meta 
> inconsistent)
> I can apply all of the above to branch-1.2 and still see this failure: 
> *The life of stillborn region d55ef81c2f8299abbddfce0445067830*
> *Master sees SPLITTING_NEW*
> {noformat}
> 2016-11-08 04:23:21,186 INFO  [AM.ZK.Worker-pool2-t82] master.RegionStates: 
> Transition null to {d55ef81c2f8299abbddfce0445067830 state=SPLITTING_NEW, 
> ts=1478579001186, server=node-3.cluster,16020,1478578389506}
> {noformat}
> *The RegionServer creates it*
> {noformat}
> 2016-11-08 04:23:26,035 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for GomnU: blockCache=LruBlockCache{blockCount=34, 
> currentSize=14996112, freeSize=12823716208, maxSize=12838712320, 
> heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 

[jira] [Commented] (HBASE-17682) Region stuck in merging_new state indefinitely

2017-02-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881780#comment-15881780
 ] 

Hudson commented on HBASE-17682:


FAILURE: Integrated in Jenkins build HBase-1.4 #644 (See 
[https://builds.apache.org/job/HBase-1.4/644/])
HBASE-17682 Region stuck in merging_new state indefinitely (apurtell: rev 
b0780bdc63aa8c5858c85ee4b6f999fd8a624a29)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java


> Region stuck in merging_new state indefinitely
> --
>
> Key: HBASE-17682
> URL: https://issues.apache.org/jira/browse/HBASE-17682
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5, 1.1.10
>
> Attachments: HBASE-17682.branch-1.3.001.patch, 
> HBASE-17682.master.001.patch
>
>
> Ran into an issue while tinkering around with a chaos monkey that did splits, 
> merges and kills exclusively, which resulted in regions getting stuck in 
> transition in the MERGING_NEW state indefinitely. I think this happens when the 
> RS is killed during the merge but before the PONR, in which case the new 
> region's state in the master is MERGING_NEW. When the RS dies at this point, the 
> master executes RegionStates.serverOffline() for the RS, which does
> {code}
> for (RegionState state : regionsInTransition.values()) {
> HRegionInfo hri = state.getRegion();
> if (assignedRegions.contains(hri)) {
>   // Region is open on this region server, but in transition.
>   // This region must be moving away from this server, or 
> splitting/merging.
>   // SSH will handle it, either skip assigning, or re-assign.
>   LOG.info("Transitioning " + state + " will be handled by 
> ServerCrashProcedure for " + sn);
> } else if (sn.equals(state.getServerName())) {
>   // Region is in transition on this region server, and this
>   // region is not open on this server. So the region must be
>   // moving to this server from another one (i.e. opening or
>   // pending open on this server, was open on another one.
>   // Offline state is also kind of pending open if the region is in
>   // transition. The region could be in failed_close state too if we 
> have
>   // tried several times to open it while this region server is not 
> reachable)
>   if (state.isPendingOpenOrOpening() || state.isFailedClose() || 
> state.isOffline()) {
> LOG.info("Found region in " + state +
>   " to be reassigned by ServerCrashProcedure for " + sn);
> rits.add(hri);
>   } else if(state.isSplittingNew()) {
> regionsToCleanIfNoMetaEntry.add(state.getRegion());
>   } else {
> LOG.warn("THIS SHOULD NOT HAPPEN: unexpected " + state);
>   }
> }
>   }
> {code}
> We do not handle MERGING_NEW here and end up with "THIS SHOULD NOT HAPPEN: 
> unexpected ...". After this, the new region, which does not have any data, is 
> stuck in transition, which prevents the balancer from running.
> I think we should handle MERGING_NEW the same way as SPLITTING_NEW. 
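The proposed fix can be sketched as follows. This is an illustrative simplification, not the actual HBASE-17682 patch: the enum and method names are stand-ins for the real RegionStates internals.

```java
// Hedged sketch: treat MERGING_NEW like SPLITTING_NEW in serverOffline(), so a
// stillborn merged region is queued for cleanup instead of falling into the
// "THIS SHOULD NOT HAPPEN" branch. Names are stand-ins, not the real API.
public class ServerOfflineSketch {
    public enum State { PENDING_OPEN, SPLITTING_NEW, MERGING_NEW, OPEN }

    // Should the region be cleaned up if it has no meta entry?
    public static boolean cleanIfNoMetaEntry(State state) {
        // Before the fix, only SPLITTING_NEW was handled here.
        return state == State.SPLITTING_NEW || state == State.MERGING_NEW;
    }
}
```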



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17460) enable_table_replication can not perform cyclic replication of a table

2017-02-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881764#comment-15881764
 ] 

Ted Yu commented on HBASE-17460:


Will commit addendum tomorrow morning if there is no objection.

> enable_table_replication can not perform cyclic replication of a table
> --
>
> Key: HBASE-17460
> URL: https://issues.apache.org/jira/browse/HBASE-17460
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: NITIN VERMA
>Assignee: NITIN VERMA
>  Labels: incompatibleChange, replication
> Fix For: 2.0.0
>
> Attachments: 17460-addendum.txt, 17460.branch-1.v3.txt, 17460.v5.txt, 
> HBASE-17460.patch, HBASE-17460_v2.patch, HBASE-17460_v3.patch, 
> HBASE-17460_v4.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> The enable_table_replication operation is broken for cyclic replication of 
> HBase table as we compare all the properties of column families (including 
> REPLICATION_SCOPE). 
> Below is exactly what happens:
> 1.  Running "enable_table_replication 'table1'  " operation on first cluster 
> will set the REPLICATION_SCOPE of all column families to peer id '1'. This 
> will also create a table on second cluster where REPLICATION_SCOPE is still 
> set to peer id '0'.
> 2. Now when we run "enable_table_replication 'table1'" on second cluster, we 
> compare all the properties of the table (including REPLICATION_SCOPE), which 
> obviously is different now. 
> I am proposing a fix for this issue where we should avoid comparing 
> REPLICATION_SCOPE inside HColumnDescriptor::compareTo() method, especially 
> when replication is not already enabled on the desired table.
> I have made that change and it is working. I will submit the patch soon.
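The proposed comparison change can be sketched like this. The Map-of-properties representation is an illustration only, not the real HColumnDescriptor API; only the idea of excluding REPLICATION_SCOPE from the comparison comes from the description above.

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: compare two column-family property sets while ignoring
// REPLICATION_SCOPE, which legitimately differs between the two clusters
// in a cyclic replication setup.
public class ScopeAgnosticCompare {
    public static boolean equalsIgnoringScope(Map<String, String> local,
                                              Map<String, String> remote) {
        Map<String, String> a = new HashMap<>(local);
        Map<String, String> b = new HashMap<>(remote);
        // Drop the one property that is expected to differ.
        a.remove("REPLICATION_SCOPE");
        b.remove("REPLICATION_SCOPE");
        return a.equals(b);
    }
}
```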



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17312) [JDK8] Use default method for Observer Coprocessors

2017-02-23 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-17312:
-
Attachment: HBASE-17312.master.004.patch

> [JDK8] Use default method for Observer Coprocessors
> ---
>
> Key: HBASE-17312
> URL: https://issues.apache.org/jira/browse/HBASE-17312
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Appy
>  Labels: incompatible
> Attachments: HBASE-17312.master.001.patch, 
> HBASE-17312.master.001.patch, HBASE-17312.master.002.patch, 
> HBASE-17312.master.003.patch, HBASE-17312.master.004.patch
>
>
> In cases where one might need to use multiple observers, say region, master 
> and regionserver; and the fact that only one class can be extended, it gives 
> rise to following pattern:
> {noformat}
> public class BaseMasterAndRegionObserver
>   extends BaseRegionObserver
>   implements MasterObserver
> class AccessController
>   extends BaseMasterAndRegionObserver
>   implements RegionServerObserver
> {noformat}
> where BaseMasterAndRegionObserver is a full copy of BaseMasterObserver.
>  There is an example of simple case too where the current design fails.
> Say only one observer is needed by the coprocessor, but the design doesn't 
> permit extending even that single observer (see RSGroupAdminEndpoint); that 
> leads to a full copy of the Base...Observer class into the coprocessor class, 
> resulting in 1000s of lines of code and an ugly mix of 5 main functions with 
> 100 useless functions.
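The JDK8 direction named in the title can be sketched as below. All interface, class, and method names here are illustrative stand-ins, not the real HBase coprocessor API; the point is only the mechanism: default methods remove the need for Base* adapter classes.

```java
// Hedged sketch: observer interfaces with no-op default methods let one
// coprocessor class implement several observers at once, overriding only
// the hooks it actually needs.
public class DefaultMethodObserversSketch {
    public interface MasterObserverSketch {
        default String preCreateTable(String table) { return "noop:" + table; }
    }
    public interface RegionObserverSketch {
        default String preGet(String row) { return "noop:" + row; }
    }
    // Mixes in both observers without extending any Base* class.
    public static class AccessControllerSketch
            implements MasterObserverSketch, RegionObserverSketch {
        @Override public String preGet(String row) { return "checked:" + row; }
    }
}
```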



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17069) RegionServer writes invalid META entries for split daughters in some circumstances

2017-02-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881753#comment-15881753
 ] 

Hudson commented on HBASE-17069:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK8 #123 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/123/])
Amend HBASE-17069 RegionServer writes invalid META entries in some (apurtell: 
rev 9d9decb4d9179cc64e3a5d9753376ed69c517a0a)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java


> RegionServer writes invalid META entries for split daughters in some 
> circumstances
> --
>
> Key: HBASE-17069
> URL: https://issues.apache.org/jira/browse/HBASE-17069
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.4
>Reporter: Andrew Purtell
>Assignee: Abhishek Singh Chouhan
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5
>
> Attachments: daughter_1_d55ef81c2f8299abbddfce0445067830.log, 
> daughter_2_08629d59564726da2497f70451aafcdb.log, 
> HBASE-17069-addendum.branch-1.3.001.patch, HBASE-17069.branch-1.3.001.patch, 
> HBASE-17069.branch-1.3.002.patch, HBASE-17069.master.001.patch, logs.tar.gz, 
> parent-393d2bfd8b1c52ce08540306659624f2.log
>
>
> I have been seeing frequent ITBLL failures testing various versions of 1.2.x. 
> Over the lifetime of 1.2.x the following issues have been fixed:
> - HBASE-15315 (Remove always set super user call as high priority)
> - HBASE-16093 (Fix splits failed before creating daughter regions leave meta 
> inconsistent)
> And this one is pending:
> - HBASE-17044 (Fix merge failed before creating merged region leaves meta 
> inconsistent)
> I can apply all of the above to branch-1.2 and still see this failure: 
> *The life of stillborn region d55ef81c2f8299abbddfce0445067830*
> *Master sees SPLITTING_NEW*
> {noformat}
> 2016-11-08 04:23:21,186 INFO  [AM.ZK.Worker-pool2-t82] master.RegionStates: 
> Transition null to {d55ef81c2f8299abbddfce0445067830 state=SPLITTING_NEW, 
> ts=1478579001186, server=node-3.cluster,16020,1478578389506}
> {noformat}
> *The RegionServer creates it*
> {noformat}
> 2016-11-08 04:23:26,035 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for GomnU: blockCache=LruBlockCache{blockCount=34, 
> currentSize=14996112, freeSize=12823716208, maxSize=12838712320, 
> heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,038 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for big: blockCache=LruBlockCache{blockCount=34, 
> currentSize=14996112, freeSize=12823716208, maxSize=12838712320, 
> heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,442 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for meta: blockCache=LruBlockCache{blockCount=63, 
> currentSize=17187656, freeSize=12821524664, maxSize=12838712320, 
> heapSize=17187656, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,713 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for nwmrW: blockCache=LruBlockCache{blockCount=96, 
> currentSize=19178440, freeSize=12819533880, maxSize=12838712320, 
> heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,715 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for piwbr: blockCache=LruBlockCache{blockCount=96, 
> currentSize=19178440, freeSize=12819533880, maxSize=12838712320, 
> heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, 

[jira] [Commented] (HBASE-17069) RegionServer writes invalid META entries for split daughters in some circumstances

2017-02-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881716#comment-15881716
 ] 

Hudson commented on HBASE-17069:


SUCCESS: Integrated in Jenkins build HBase-1.2-JDK8 #100 (See 
[https://builds.apache.org/job/HBase-1.2-JDK8/100/])
Amend HBASE-17069 RegionServer writes invalid META entries in some (apurtell: 
rev fe00b59a3f11a0fffad271934d56ad8f733ca86b)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java


> RegionServer writes invalid META entries for split daughters in some 
> circumstances
> --
>
> Key: HBASE-17069
> URL: https://issues.apache.org/jira/browse/HBASE-17069
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.4
>Reporter: Andrew Purtell
>Assignee: Abhishek Singh Chouhan
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5
>
> Attachments: daughter_1_d55ef81c2f8299abbddfce0445067830.log, 
> daughter_2_08629d59564726da2497f70451aafcdb.log, 
> HBASE-17069-addendum.branch-1.3.001.patch, HBASE-17069.branch-1.3.001.patch, 
> HBASE-17069.branch-1.3.002.patch, HBASE-17069.master.001.patch, logs.tar.gz, 
> parent-393d2bfd8b1c52ce08540306659624f2.log
>
>
> I have been seeing frequent ITBLL failures testing various versions of 1.2.x. 
> Over the lifetime of 1.2.x the following issues have been fixed:
> - HBASE-15315 (Remove always set super user call as high priority)
> - HBASE-16093 (Fix splits failed before creating daughter regions leave meta 
> inconsistent)
> And this one is pending:
> - HBASE-17044 (Fix merge failed before creating merged region leaves meta 
> inconsistent)
> I can apply all of the above to branch-1.2 and still see this failure: 
> *The life of stillborn region d55ef81c2f8299abbddfce0445067830*
> *Master sees SPLITTING_NEW*
> {noformat}
> 2016-11-08 04:23:21,186 INFO  [AM.ZK.Worker-pool2-t82] master.RegionStates: 
> Transition null to {d55ef81c2f8299abbddfce0445067830 state=SPLITTING_NEW, 
> ts=1478579001186, server=node-3.cluster,16020,1478578389506}
> {noformat}
> *The RegionServer creates it*
> {noformat}
> 2016-11-08 04:23:26,035 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for GomnU: blockCache=LruBlockCache{blockCount=34, 
> currentSize=14996112, freeSize=12823716208, maxSize=12838712320, 
> heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,038 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for big: blockCache=LruBlockCache{blockCount=34, 
> currentSize=14996112, freeSize=12823716208, maxSize=12838712320, 
> heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,442 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for meta: blockCache=LruBlockCache{blockCount=63, 
> currentSize=17187656, freeSize=12821524664, maxSize=12838712320, 
> heapSize=17187656, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,713 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for nwmrW: blockCache=LruBlockCache{blockCount=96, 
> currentSize=19178440, freeSize=12819533880, maxSize=12838712320, 
> heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,715 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for piwbr: blockCache=LruBlockCache{blockCount=96, 
> currentSize=19178440, freeSize=12819533880, maxSize=12838712320, 
> heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, 

[jira] [Commented] (HBASE-17662) Disable in-memory flush when replaying from WAL

2017-02-23 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881686#comment-15881686
 ] 

Anoop Sam John commented on HBASE-17662:


I was just asking whether the WAL replay handling (and so this boolean write 
and read) is done by a single thread only or not; I did not read the code. My 
worry on the 1st patch was that it was doing a volatile read on every Cell 
addition. Now we have changed it so that once the in-memory flush size limit 
is reached, we do the boolean read to confirm it is not replay time. So we 
are ok IMO. I understand you made it an AtomicBoolean in a preventive way. I am 
ok with that way too, provided we move the boolean check to be the 2nd check now.
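The check ordering discussed above can be sketched as follows. The names, the threshold, and the flush stand-in are illustrative, not the real CompactingMemStore code; the point is that `&&` short-circuiting makes the atomic read the 2nd check, skipped for most cell additions.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hedged sketch: the cheap size comparison runs first; the replay flag is
// read only once the in-memory flush threshold has been reached.
public class InMemoryFlushSketch {
    static final long FLUSH_THRESHOLD = 1024;
    static final AtomicBoolean replayingFromWal = new AtomicBoolean(false);

    static boolean flushRequested;

    static void maybeFlushInMemory(long segmentSize) {
        // The atomic read is the 2nd check, thanks to short-circuit evaluation.
        if (segmentSize >= FLUSH_THRESHOLD && !replayingFromWal.get()) {
            flushRequested = true; // stand-in for the real in-memory flush
        }
    }
}
```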

> Disable in-memory flush when replaying from WAL
> ---
>
> Key: HBASE-17662
> URL: https://issues.apache.org/jira/browse/HBASE-17662
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17662-V02.patch, HBASE-17662-V03.patch, 
> HBASE-17662-V04.patch, HBASE-17662-V05.patch
>
>
> When replaying the edits from WAL, the region's updateLock is not taken, 
> because a single threaded action is assumed. However, the thread-safeness of 
> the in-memory flush of CompactingMemStore is based on taking the region's 
> updateLock. 
> The in-memory flush can be skipped in the replay time (anyway everything is 
> flushed to disk just after the replay). Therefore it is acceptable to just 
> skip the in-memory flush action while the updates come as part of replay from 
> WAL.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17680) Run mini cluster through JNI in tests

2017-02-23 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881683#comment-15881683
 ] 

Enis Soztutar commented on HBASE-17680:
---

bq. Not sure of why we should create NativeTestingUtility - it would be wrapper 
around HTU and is written in Java, not C++.
The difference is that you do not have to code in the JNI layer, and the 
code will be significantly smaller as well as more maintainable. The only thing 
you need to pass back is the return result. For example, in the tests, usually 
we create a table with a given family and put some data. You just need two 
methods which directly forward the request to the Java class in this case; no 
need to import the Put object, Table object, Admin, etc. 


> Run mini cluster through JNI in tests
> -
>
> Key: HBASE-17680
> URL: https://issues.apache.org/jira/browse/HBASE-17680
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 17680.v1.txt, 17680.v3.txt, 17680.v8.txt
>
>
> Currently tests start local hbase cluster through hbase shell.
> There is less control over the configuration of the local cluster this way.
> This issue would replace hbase shell with JNI interface to mini cluster.
> We would have full control over the cluster behavior.
> Thanks to [~devaraj] who started this initiative.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17680) Run mini cluster through JNI in tests

2017-02-23 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881670#comment-15881670
 ] 

Enis Soztutar commented on HBASE-17680:
---

Couple of more comments: 
 - At least some of the methods are C-style, and rest are C++ style. We can 
just stick with C++ and encapsulate everything inside the class. 
 - Also, you can use the much nicer ifstream instead of fopen / fclose, and use 
std::string instead of mallocing C strings. You can use {{string::c_str()}} to 
pass strings back as JVM args. 
 - Instead of {{(*env).Foo()}}, you should use {{env->Foo()}}. 


> Run mini cluster through JNI in tests
> -
>
> Key: HBASE-17680
> URL: https://issues.apache.org/jira/browse/HBASE-17680
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 17680.v1.txt, 17680.v3.txt, 17680.v8.txt
>
>
> Currently tests start local hbase cluster through hbase shell.
> There is less control over the configuration of the local cluster this way.
> This issue would replace hbase shell with JNI interface to mini cluster.
> We would have full control over the cluster behavior.
> Thanks to [~devaraj] who started this initiative.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17680) Run mini cluster through JNI in tests

2017-02-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881652#comment-15881652
 ] 

Ted Yu commented on HBASE-17680:


Not sure why we should create NativeTestingUtility - it would be a wrapper 
around HTU and is written in Java, not C++.
When native client calls NativeTestingUtility, same translation needs to be 
carried out.

bq. maybe merge mini-cluster to test-util.cc

I can do the above.

> Run mini cluster through JNI in tests
> -
>
> Key: HBASE-17680
> URL: https://issues.apache.org/jira/browse/HBASE-17680
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 17680.v1.txt, 17680.v3.txt, 17680.v8.txt
>
>
> Currently tests start local hbase cluster through hbase shell.
> There is less control over the configuration of the local cluster this way.
> This issue would replace hbase shell with JNI interface to mini cluster.
> We would have full control over the cluster behavior.
> Thanks to [~devaraj] who started this initiative.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17460) enable_table_replication can not perform cyclic replication of a table

2017-02-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17460:
---
Attachment: 17460-addendum.txt

> enable_table_replication can not perform cyclic replication of a table
> --
>
> Key: HBASE-17460
> URL: https://issues.apache.org/jira/browse/HBASE-17460
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: NITIN VERMA
>Assignee: NITIN VERMA
>  Labels: incompatibleChange, replication
> Fix For: 2.0.0
>
> Attachments: 17460-addendum.txt, 17460.branch-1.v3.txt, 17460.v5.txt, 
> HBASE-17460.patch, HBASE-17460_v2.patch, HBASE-17460_v3.patch, 
> HBASE-17460_v4.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> The enable_table_replication operation is broken for cyclic replication of 
> HBase table as we compare all the properties of column families (including 
> REPLICATION_SCOPE). 
> Below is exactly what happens:
> 1.  Running "enable_table_replication 'table1'  " operation on first cluster 
> will set the REPLICATION_SCOPE of all column families to peer id '1'. This 
> will also create a table on second cluster where REPLICATION_SCOPE is still 
> set to peer id '0'.
> 2. Now when we run "enable_table_replication 'table1'" on second cluster, we 
> compare all the properties of the table (including REPLICATION_SCOPE), which 
> obviously is different now. 
> I am proposing a fix for this issue where we should avoid comparing 
> REPLICATION_SCOPE inside HColumnDescriptor::compareTo() method, especially 
> when replication is not already enabled on the desired table.
> I have made that change and it is working. I will submit the patch soon.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17680) Run mini cluster through JNI in tests

2017-02-23 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881646#comment-15881646
 ] 

Enis Soztutar commented on HBASE-17680:
---

Thanks for working on this [~devaraj] and [~ted_yu]. 
This helps a lot in reducing the test execution time as well, since we do not 
have to wait for java and jruby instantiation multiple times. If I interpret 
the above correctly, the run time goes from ~60 secs to <10 secs. Plus, we need the 
multi-regionserver capabilities as well as killing servers, etc for the native 
client tests which are already available in the mini hbase cluster. 

For the patch: 
 - Can we move the mini-cluster to be under {{test-util}} module? It does not 
belong in core. 
 - I really like the way that we can call any method from HTU, Connection, 
Admin, Table etc, but most of the code in mini-cluster.cc is unnecessarily in 
the cpp side. Can we do this instead: create a NativeTestingUtility.java in 
hbase-server, and also maybe merge mini-cluster into test-util.cc. The Java 
counterpart will contain almost all of the code that we need to invoke from 
native (like writeConf(), create_table(), etc. in the mini-cluster). This will be 
better, because almost all of the code will be in the Java side which is way 
more maintainable. mini-cluster.cc will just call the corresponding java method 
in NativeTestingUtility class. 
 - Notice that methods like tablePut, create_table, etc will be dramatically 
simpler. 
 - Method names in cpp usually use camel case with initial upper case, so 
methods like {{start_cluster()}} should be named {{StartCluster()}}. 
 - You also need to call {{DestroyJavaVM()}} once the testing is done. Maybe 
add it to Shutdown() or something. 
 - I think this {{+compiler_flags = ['-I', 
'/usr/lib/jvm/java-8-openjdk-amd64/include/', '-I', 
'/usr/lib/jvm/java-8-openjdk-amd64/include/linux'],
}} assumes that we are running in the docker container. Our builds are weird, 
where there is the buck based build and the make based build. However, both are 
supposed to work without the docker env. So maybe we need to source the 
JAVA_HOME. 

> Run mini cluster through JNI in tests
> -
>
> Key: HBASE-17680
> URL: https://issues.apache.org/jira/browse/HBASE-17680
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 17680.v1.txt, 17680.v3.txt, 17680.v8.txt
>
>
> Currently tests start local hbase cluster through hbase shell.
> There is less control over the configuration of the local cluster this way.
> This issue would replace hbase shell with JNI interface to mini cluster.
> We would have full control over the cluster behavior.
> Thanks to [~devaraj] who started this initiative.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17460) enable_table_replication can not perform cyclic replication of a table

2017-02-23 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881617#comment-15881617
 ] 

Enis Soztutar commented on HBASE-17460:
---

Sorry to come in late. Can you please address these review comments as well? An 
addendum is fine; otherwise, we need to revert again.
 - {{copyReplicationScope()}} should be private. 
 - In {{copyReplicationScope()}}, per Java convention, the opening brace should be on the same line: 
{code} 
 +  public int copyReplicationScope(final HTableDescriptor localHtd)
+  {
{code}
and 
{code}
+if (remoteHCDName.equals(localHCDName))
+{
{code}
- copyReplicationScope() should return a boolean instead.  
- You don't check whether there are equal numbers of column families in the 
HTDs. Iterating like this will not fail if either of them contains a smaller 
number of column families. 
{code}
while (remoteHCDIter.hasNext() && localHCDIter.hasNext()) { 
{code}
 - The following
{code}
if (result == true) {
{code}
should be {{ if (result) }}
 - The methods compareForReplication, etc should not be in HTD. They are very 
replication specific. Can we please move them to a replication-related utility 
class, or keep them private in Admin. 
 - compareForReplication() should take HTD as an argument. No need to take a 
generic object (unlike the generic equals() method). 
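The family-count review point can be sketched as below. Simplified String families stand in for the real HColumnDescriptor objects; the sketch shows why the size check must come before the paired-iterator loop, which otherwise silently succeeds when one descriptor has extra families.

```java
import java.util.Iterator;
import java.util.List;

// Hedged sketch: compare two family lists; a bare
// "while (a.hasNext() && b.hasNext())" loop ignores trailing extras,
// so unequal sizes must be rejected up front.
public class FamilyCompareSketch {
    public static boolean sameFamilies(List<String> local, List<String> remote) {
        if (local.size() != remote.size()) {
            return false; // unequal family counts must fail the comparison
        }
        Iterator<String> l = local.iterator();
        Iterator<String> r = remote.iterator();
        while (l.hasNext() && r.hasNext()) {
            if (!l.next().equals(r.next())) {
                return false;
            }
        }
        return true;
    }
}
```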



> enable_table_replication can not perform cyclic replication of a table
> --
>
> Key: HBASE-17460
> URL: https://issues.apache.org/jira/browse/HBASE-17460
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: NITIN VERMA
>Assignee: NITIN VERMA
>  Labels: incompatibleChange, replication
> Fix For: 2.0.0
>
> Attachments: 17460.branch-1.v3.txt, 17460.v5.txt, HBASE-17460.patch, 
> HBASE-17460_v2.patch, HBASE-17460_v3.patch, HBASE-17460_v4.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> The enable_table_replication operation is broken for cyclic replication of 
> HBase table as we compare all the properties of column families (including 
> REPLICATION_SCOPE). 
> Below is exactly what happens:
> 1.  Running "enable_table_replication 'table1'  " operation on first cluster 
> will set the REPLICATION_SCOPE of all column families to peer id '1'. This 
> will also create a table on second cluster where REPLICATION_SCOPE is still 
> set to peer id '0'.
> 2. Now when we run "enable_table_replication 'table1'" on second cluster, we 
> compare all the properties of the table (including REPLICATION_SCOPE), which 
> obviously is different now. 
> I am proposing a fix for this issue where we should avoid comparing 
> REPLICATION_SCOPE inside HColumnDescriptor::compareTo() method, especially 
> when replication is not already enabled on the desired table.
> I have made that change and it is working. I will submit the patch soon.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HBASE-17460) enable_table_replication can not perform cyclic replication of a table

2017-02-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881599#comment-15881599
 ] 

Ted Yu edited comment on HBASE-17460 at 2/24/17 12:20 AM:
--

checkAndSyncTableDescToPeers() is called by enableTableRep() in 1.x.
For branch-1 backport, how about adding the following method:
{code}
  public void enableTableRep(final TableName tableName, boolean 
throwExOnParitalEnable) throws IOException {
{code}
Default value for throwExOnParitalEnable would be false - not throwing 
IllegalArgumentException.

Ruby script can pass the flag to Java API so that user has control over this 
aspect.
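The proposed overload can be sketched as below. The identifier throwExOnParitalEnable is spelled as in the comment above; everything else (the String table name, the body) is a stand-in for the real Admin API, showing only the delegation pattern that keeps the default at false.

```java
// Hedged sketch: the existing single-argument entry point keeps its old
// behavior by delegating with the new flag defaulted to false.
public class EnableTableRepSketch {
    static String lastCall;

    public static void enableTableRep(String tableName) {
        enableTableRep(tableName, false); // default: no IllegalArgumentException
    }

    public static void enableTableRep(String tableName,
                                      boolean throwExOnParitalEnable) {
        lastCall = tableName + ":" + throwExOnParitalEnable; // stand-in body
    }
}
```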


was (Author: yuzhih...@gmail.com):
checkAndSyncTableDescToPeers() is called by enableTableReplication().
For branch-1 backport, how about adding the following method:
{code}
  public void enableTableReplication(final TableName tableName, boolean 
throwExOnParitalEnable) throws IOException {
{code}
Default value for throwExOnParitalEnable would be false - not throwing 
IllegalArgumentException.

Ruby script can pass the flag to Java API so that user has control over this 
aspect.

> enable_table_replication can not perform cyclic replication of a table
> --
>
> Key: HBASE-17460
> URL: https://issues.apache.org/jira/browse/HBASE-17460
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: NITIN VERMA
>Assignee: NITIN VERMA
>  Labels: incompatibleChange, replication
> Fix For: 2.0.0
>
> Attachments: 17460.branch-1.v3.txt, 17460.v5.txt, HBASE-17460.patch, 
> HBASE-17460_v2.patch, HBASE-17460_v3.patch, HBASE-17460_v4.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> The enable_table_replication operation is broken for cyclic replication of 
> HBase table as we compare all the properties of column families (including 
> REPLICATION_SCOPE). 
> Below is exactly what happens:
> 1. Running "enable_table_replication 'table1'" on the first cluster 
> will set the REPLICATION_SCOPE of all column families to peer id '1'. This 
> will also create a table on the second cluster where REPLICATION_SCOPE is still 
> set to peer id '0'.
> 2. Now when we run "enable_table_replication 'table1'" on the second cluster, we 
> compare all the properties of the table (including REPLICATION_SCOPE), which 
> obviously is different now. 
> I am proposing a fix for this issue where we should avoid comparing 
> REPLICATION_SCOPE inside HColumnDescriptor::compareTo(), especially 
> when replication is not already enabled on the desired table.
> I have made that change and it is working. I will submit the patch soon.
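A minimal, self-contained sketch of the comparison the fix proposes. Plain maps stand in for the per-column-family properties of HColumnDescriptor; the class and method names here are hypothetical illustrations, not the actual HBase API:

```java
import java.util.HashMap;
import java.util.Map;

public class ScopeAgnosticCompare {
    static final String REPLICATION_SCOPE = "REPLICATION_SCOPE";

    // Compare two column-family property maps, ignoring REPLICATION_SCOPE,
    // so a scope of '1' on one cluster and '0' on the other still matches.
    static boolean sameExceptScope(Map<String, String> a, Map<String, String> b) {
        Map<String, String> ca = new HashMap<>(a);
        Map<String, String> cb = new HashMap<>(b);
        ca.remove(REPLICATION_SCOPE);
        cb.remove(REPLICATION_SCOPE);
        return ca.equals(cb);
    }

    public static void main(String[] args) {
        Map<String, String> first = new HashMap<>();
        first.put("BLOOMFILTER", "ROW");
        first.put(REPLICATION_SCOPE, "1"); // already enabled on the first cluster

        Map<String, String> second = new HashMap<>(first);
        second.put(REPLICATION_SCOPE, "0"); // not yet enabled on the second cluster

        // With the scope excluded, the descriptors compare equal, so
        // enable_table_replication on the second cluster would proceed.
        System.out.println(sameExceptScope(first, second)); // prints "true"
    }
}
```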





[jira] [Commented] (HBASE-17460) enable_table_replication can not perform cyclic replication of a table

2017-02-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881599#comment-15881599
 ] 

Ted Yu commented on HBASE-17460:


checkAndSyncTableDescToPeers() is called by enableTableReplication().
For branch-1 backport, how about adding the following method:
{code}
  public void enableTableReplication(final TableName tableName, boolean 
throwExOnParitalEnable) throws IOException {
{code}
Default value for throwExOnParitalEnable would be false - not throwing 
IllegalArgumentException.

Ruby script can pass the flag to Java API so that user has control over this 
aspect.
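A self-contained sketch of the flag-controlled behavior being discussed, with stand-in types rather than the real Admin API: when descriptors differ across peers, the boolean decides between throwing and merely reporting, so the shell can keep the old lenient behavior by default.

```java
public class EnableTableRep {
    static class PartialEnableException extends RuntimeException {
        PartialEnableException(String msg) { super(msg); }
    }

    // Stand-in for enableTableReplication(TableName, boolean). In the real
    // method, descsMatch would come from comparing local and peer descriptors.
    static String enableTableReplication(String table, boolean descsMatch,
                                         boolean throwExOnPartialEnable) {
        if (!descsMatch) {
            if (throwExOnPartialEnable) {
                throw new PartialEnableException("Table " + table + " differs on a peer");
            }
            return "partial"; // lenient default: report and continue
        }
        return "enabled";
    }

    public static void main(String[] args) {
        System.out.println(enableTableReplication("table1", true, false));  // enabled
        System.out.println(enableTableReplication("table1", false, false)); // partial
        try {
            enableTableReplication("table1", false, true);
        } catch (PartialEnableException e) {
            System.out.println("threw: " + e.getMessage());
        }
    }
}
```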

> enable_table_replication can not perform cyclic replication of a table
> --
>
> Key: HBASE-17460
> URL: https://issues.apache.org/jira/browse/HBASE-17460
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: NITIN VERMA
>Assignee: NITIN VERMA
>  Labels: incompatibleChange, replication
> Fix For: 2.0.0
>
> Attachments: 17460.branch-1.v3.txt, 17460.v5.txt, HBASE-17460.patch, 
> HBASE-17460_v2.patch, HBASE-17460_v3.patch, HBASE-17460_v4.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> The enable_table_replication operation is broken for cyclic replication of 
> HBase table as we compare all the properties of column families (including 
> REPLICATION_SCOPE). 
> Below is exactly what happens:
> 1. Running "enable_table_replication 'table1'" on the first cluster 
> will set the REPLICATION_SCOPE of all column families to peer id '1'. This 
> will also create a table on the second cluster where REPLICATION_SCOPE is still 
> set to peer id '0'.
> 2. Now when we run "enable_table_replication 'table1'" on the second cluster, we 
> compare all the properties of the table (including REPLICATION_SCOPE), which 
> obviously is different now. 
> I am proposing a fix for this issue where we should avoid comparing 
> REPLICATION_SCOPE inside HColumnDescriptor::compareTo(), especially 
> when replication is not already enabled on the desired table.
> I have made that change and it is working. I will submit the patch soon.





[jira] [Commented] (HBASE-17662) Disable in-memory flush when replaying from WAL

2017-02-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881591#comment-15881591
 ] 

Hadoop QA commented on HBASE-17662:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
30s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 30s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 24s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 139m 0s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12854293/HBASE-17662-V05.patch 
|
| JIRA Issue | HBASE-17662 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux fe16eff01be7 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 8fb44fa |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5817/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5817/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5817/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Disable in-memory flush when replaying from WAL
> ---
>
> Key: HBASE-17662
> URL: https://issues.apache.org/jira/browse/HBASE-17662
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: 

[jira] [Updated] (HBASE-17686) Improve Javadoc comments in Observer Interfaces

2017-02-23 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-17686:

Component/s: documentation

> Improve Javadoc comments in Observer Interfaces
> ---
>
> Key: HBASE-17686
> URL: https://issues.apache.org/jira/browse/HBASE-17686
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors, documentation
>Affects Versions: 2.0.0
>Reporter: Zach York
>Assignee: Zach York
>Priority: Minor
>
> Based off of comments from https://issues.apache.org/jira/browse/HBASE-17312, 
> we should improve Javadoc comments in the Observer interfaces. This JIRA 
> includes adding @returns to clarify what is being returned (and why) and to 
> either improve @params/@throws or remove if there is no way to provide 
> meaningful information.





[jira] [Updated] (HBASE-17682) Region stuck in merging_new state indefinitely

2017-02-23 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-17682:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.1.10
   1.2.5
   1.3.1
   1.4.0
   2.0.0
   Status: Resolved  (was: Patch Available)

Relevant for branch-1.1 and up, applied to all

> Region stuck in merging_new state indefinitely
> --
>
> Key: HBASE-17682
> URL: https://issues.apache.org/jira/browse/HBASE-17682
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5, 1.1.10
>
> Attachments: HBASE-17682.branch-1.3.001.patch, 
> HBASE-17682.master.001.patch
>
>
> Ran into this issue while tinkering with a chaos monkey that did splits, 
> merges and kills exclusively, which resulted in regions getting stuck in 
> transition in the MERGING_NEW state indefinitely. I think this happens when the 
> RS is killed during the merge but before the PONR (point of no return), in 
> which case the new region's state in the master is MERGING_NEW. When the RS 
> dies at this point, the master executes RegionStates.serverOffline() for the RS, which does
> {code}
> for (RegionState state : regionsInTransition.values()) {
> HRegionInfo hri = state.getRegion();
> if (assignedRegions.contains(hri)) {
>   // Region is open on this region server, but in transition.
>   // This region must be moving away from this server, or 
> splitting/merging.
>   // SSH will handle it, either skip assigning, or re-assign.
>   LOG.info("Transitioning " + state + " will be handled by 
> ServerCrashProcedure for " + sn);
> } else if (sn.equals(state.getServerName())) {
>   // Region is in transition on this region server, and this
>   // region is not open on this server. So the region must be
>   // moving to this server from another one (i.e. opening or
>   // pending open on this server, was open on another one.
>   // Offline state is also kind of pending open if the region is in
>   // transition. The region could be in failed_close state too if we 
> have
>   // tried several times to open it while this region server is not 
> reachable)
>   if (state.isPendingOpenOrOpening() || state.isFailedClose() || 
> state.isOffline()) {
> LOG.info("Found region in " + state +
>   " to be reassigned by ServerCrashProcedure for " + sn);
> rits.add(hri);
>   } else if(state.isSplittingNew()) {
> regionsToCleanIfNoMetaEntry.add(state.getRegion());
>   } else {
> LOG.warn("THIS SHOULD NOT HAPPEN: unexpected " + state);
>   }
> }
>   }
> {code}
> We do not handle MERGING_NEW here and end up with "THIS SHOULD NOT HAPPEN: 
> unexpected ...". After this, the new region, which does not have any 
> data, is stuck, which prevents the balancer from running.
> I think we should handle MERGING_NEW the same way as SPLITTING_NEW. 
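A self-contained sketch of the proposed change to the serverOffline() branch above (the enum and collections are stand-ins for the RegionStates internals): MERGING_NEW regions join SPLITTING_NEW ones in the clean-up list instead of falling through to the warning.

```java
import java.util.ArrayList;
import java.util.List;

public class ServerOfflineSketch {
    enum State { PENDING_OPEN, FAILED_CLOSE, OFFLINE, SPLITTING_NEW, MERGING_NEW }

    // Returns the states whose regions should be cleaned if they have no meta entry.
    static List<State> regionsToClean(List<State> inTransition) {
        List<State> toClean = new ArrayList<>();
        for (State s : inTransition) {
            switch (s) {
                case PENDING_OPEN:
                case FAILED_CLOSE:
                case OFFLINE:
                    break; // reassigned by ServerCrashProcedure, not cleaned here
                case SPLITTING_NEW:
                case MERGING_NEW: // proposed: treat like SPLITTING_NEW
                    toClean.add(s);
                    break;
                default:
                    System.out.println("THIS SHOULD NOT HAPPEN: unexpected " + s);
            }
        }
        return toClean;
    }

    public static void main(String[] args) {
        List<State> rit =
            List.of(State.SPLITTING_NEW, State.MERGING_NEW, State.PENDING_OPEN);
        System.out.println(regionsToClean(rit)); // prints "[SPLITTING_NEW, MERGING_NEW]"
    }
}
```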





[jira] [Commented] (HBASE-17460) enable_table_replication can not perform cyclic replication of a table

2017-02-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881529#comment-15881529
 ] 

Ted Yu commented on HBASE-17460:


[~nitin.ve...@gmail.com]:
Janos has some bandwidth.
Is it okay if Janos works on the backport?

Thanks

> enable_table_replication can not perform cyclic replication of a table
> --
>
> Key: HBASE-17460
> URL: https://issues.apache.org/jira/browse/HBASE-17460
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: NITIN VERMA
>Assignee: NITIN VERMA
>  Labels: incompatibleChange, replication
> Fix For: 2.0.0
>
> Attachments: 17460.branch-1.v3.txt, 17460.v5.txt, HBASE-17460.patch, 
> HBASE-17460_v2.patch, HBASE-17460_v3.patch, HBASE-17460_v4.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> The enable_table_replication operation is broken for cyclic replication of 
> HBase table as we compare all the properties of column families (including 
> REPLICATION_SCOPE). 
> Below is exactly what happens:
> 1. Running "enable_table_replication 'table1'" on the first cluster 
> will set the REPLICATION_SCOPE of all column families to peer id '1'. This 
> will also create a table on the second cluster where REPLICATION_SCOPE is still 
> set to peer id '0'.
> 2. Now when we run "enable_table_replication 'table1'" on the second cluster, we 
> compare all the properties of the table (including REPLICATION_SCOPE), which 
> obviously is different now. 
> I am proposing a fix for this issue where we should avoid comparing 
> REPLICATION_SCOPE inside HColumnDescriptor::compareTo(), especially 
> when replication is not already enabled on the desired table.
> I have made that change and it is working. I will submit the patch soon.





[jira] [Commented] (HBASE-17662) Disable in-memory flush when replaying from WAL

2017-02-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881524#comment-15881524
 ] 

stack commented on HBASE-17662:
---

Ok. Thanks for the explanation.

Will the thread that sets the state be the same as the one reading it? Is that what 
the single-threaded presumption around WAL replay means? If it is single-threaded, why 
are there concerns around in-memory flush? Does it only work if the update lock is 
taken? (The flag can't be volatile; that would be too expensive if we have to check it 
on each update to the memstore.)

> Disable in-memory flush when replaying from WAL
> ---
>
> Key: HBASE-17662
> URL: https://issues.apache.org/jira/browse/HBASE-17662
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17662-V02.patch, HBASE-17662-V03.patch, 
> HBASE-17662-V04.patch, HBASE-17662-V05.patch
>
>
> When replaying the edits from WAL, the region's updateLock is not taken, 
> because a single-threaded action is assumed. However, the thread-safety of 
> the in-memory flush of CompactingMemStore is based on taking the region's 
> updateLock. 
> The in-memory flush can be skipped at replay time (everything is 
> flushed to disk just after the replay anyway). Therefore it is acceptable to 
> skip the in-memory flush action while the updates come as part of replay from 
> WAL.
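A self-contained sketch of the idea in plain Java, with stand-in names rather than the real CompactingMemStore: a replay flag, set before WAL replay starts and cleared after, short-circuits the in-memory flush check so no flush races with the unlocked replay path. Per the discussion above, the flag is assumed to be read and written by the single replay thread, so it is deliberately not volatile.

```java
public class ReplayAwareMemStore {
    private boolean replayingFromWAL = false; // single-threaded replay assumed
    private long size = 0;
    private int inMemoryFlushes = 0;
    private static final long IN_MEMORY_FLUSH_THRESHOLD = 100;

    void startReplay()  { replayingFromWAL = true; }
    void finishReplay() { replayingFromWAL = false; }

    void add(long cellSize) {
        size += cellSize;
        // Skip the in-memory flush during replay; everything is flushed
        // to disk just after the replay anyway.
        if (!replayingFromWAL && size > IN_MEMORY_FLUSH_THRESHOLD) {
            inMemoryFlushes++;
            size = 0;
        }
    }

    int flushCount() { return inMemoryFlushes; }

    public static void main(String[] args) {
        ReplayAwareMemStore m = new ReplayAwareMemStore();
        m.startReplay();
        for (int i = 0; i < 5; i++) m.add(50); // would normally trigger flushes
        m.finishReplay();
        System.out.println(m.flushCount()); // prints "0": no flush during replay
    }
}
```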





[jira] [Commented] (HBASE-17069) RegionServer writes invalid META entries for split daughters in some circumstances

2017-02-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881509#comment-15881509
 ] 

Hudson commented on HBASE-17069:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #838 (See 
[https://builds.apache.org/job/HBase-1.3-IT/838/])
Amend HBASE-17069 RegionServer writes invalid META entries in some (apurtell: 
rev fe00b59a3f11a0fffad271934d56ad8f733ca86b)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java


> RegionServer writes invalid META entries for split daughters in some 
> circumstances
> --
>
> Key: HBASE-17069
> URL: https://issues.apache.org/jira/browse/HBASE-17069
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.4
>Reporter: Andrew Purtell
>Assignee: Abhishek Singh Chouhan
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5
>
> Attachments: daughter_1_d55ef81c2f8299abbddfce0445067830.log, 
> daughter_2_08629d59564726da2497f70451aafcdb.log, 
> HBASE-17069-addendum.branch-1.3.001.patch, HBASE-17069.branch-1.3.001.patch, 
> HBASE-17069.branch-1.3.002.patch, HBASE-17069.master.001.patch, logs.tar.gz, 
> parent-393d2bfd8b1c52ce08540306659624f2.log
>
>
> I have been seeing frequent ITBLL failures testing various versions of 1.2.x. 
> Over the lifetime of 1.2.x the following issues have been fixed:
> - HBASE-15315 (Remove always set super user call as high priority)
> - HBASE-16093 (Fix splits failed before creating daughter regions leave meta 
> inconsistent)
> And this one is pending:
> - HBASE-17044 (Fix merge failed before creating merged region leaves meta 
> inconsistent)
> I can apply all of the above to branch-1.2 and still see this failure: 
> *The life of stillborn region d55ef81c2f8299abbddfce0445067830*
> *Master sees SPLITTING_NEW*
> {noformat}
> 2016-11-08 04:23:21,186 INFO  [AM.ZK.Worker-pool2-t82] master.RegionStates: 
> Transition null to {d55ef81c2f8299abbddfce0445067830 state=SPLITTING_NEW, 
> ts=1478579001186, server=node-3.cluster,16020,1478578389506}
> {noformat}
> *The RegionServer creates it*
> {noformat}
> 2016-11-08 04:23:26,035 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for GomnU: blockCache=LruBlockCache{blockCount=34, 
> currentSize=14996112, freeSize=12823716208, maxSize=12838712320, 
> heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,038 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for big: blockCache=LruBlockCache{blockCount=34, 
> currentSize=14996112, freeSize=12823716208, maxSize=12838712320, 
> heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,442 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for meta: blockCache=LruBlockCache{blockCount=63, 
> currentSize=17187656, freeSize=12821524664, maxSize=12838712320, 
> heapSize=17187656, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,713 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for nwmrW: blockCache=LruBlockCache{blockCount=96, 
> currentSize=19178440, freeSize=12819533880, maxSize=12838712320, 
> heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,715 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for piwbr: blockCache=LruBlockCache{blockCount=96, 
> currentSize=19178440, freeSize=12819533880, maxSize=12838712320, 
> heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, 

[jira] [Commented] (HBASE-17682) Region stuck in merging_new state indefinitely

2017-02-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881504#comment-15881504
 ] 

Andrew Purtell commented on HBASE-17682:


Ok, committing shortly


> Region stuck in merging_new state indefinitely
> --
>
> Key: HBASE-17682
> URL: https://issues.apache.org/jira/browse/HBASE-17682
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Attachments: HBASE-17682.branch-1.3.001.patch, 
> HBASE-17682.master.001.patch
>
>
> Ran into this issue while tinkering with a chaos monkey that did splits, 
> merges and kills exclusively, which resulted in regions getting stuck in 
> transition in the MERGING_NEW state indefinitely. I think this happens when the 
> RS is killed during the merge but before the PONR (point of no return), in 
> which case the new region's state in the master is MERGING_NEW. When the RS 
> dies at this point, the master executes RegionStates.serverOffline() for the RS, which does
> {code}
> for (RegionState state : regionsInTransition.values()) {
> HRegionInfo hri = state.getRegion();
> if (assignedRegions.contains(hri)) {
>   // Region is open on this region server, but in transition.
>   // This region must be moving away from this server, or 
> splitting/merging.
>   // SSH will handle it, either skip assigning, or re-assign.
>   LOG.info("Transitioning " + state + " will be handled by 
> ServerCrashProcedure for " + sn);
> } else if (sn.equals(state.getServerName())) {
>   // Region is in transition on this region server, and this
>   // region is not open on this server. So the region must be
>   // moving to this server from another one (i.e. opening or
>   // pending open on this server, was open on another one.
>   // Offline state is also kind of pending open if the region is in
>   // transition. The region could be in failed_close state too if we 
> have
>   // tried several times to open it while this region server is not 
> reachable)
>   if (state.isPendingOpenOrOpening() || state.isFailedClose() || 
> state.isOffline()) {
> LOG.info("Found region in " + state +
>   " to be reassigned by ServerCrashProcedure for " + sn);
> rits.add(hri);
>   } else if(state.isSplittingNew()) {
> regionsToCleanIfNoMetaEntry.add(state.getRegion());
>   } else {
> LOG.warn("THIS SHOULD NOT HAPPEN: unexpected " + state);
>   }
> }
>   }
> {code}
> We do not handle MERGING_NEW here and end up with "THIS SHOULD NOT HAPPEN: 
> unexpected ...". After this, the new region, which does not have any 
> data, is stuck, which prevents the balancer from running.
> I think we should handle MERGING_NEW the same way as SPLITTING_NEW. 





[jira] [Updated] (HBASE-17069) RegionServer writes invalid META entries for split daughters in some circumstances

2017-02-23 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-17069:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed addendum to branch-1.2 and up, re-resolving

> RegionServer writes invalid META entries for split daughters in some 
> circumstances
> --
>
> Key: HBASE-17069
> URL: https://issues.apache.org/jira/browse/HBASE-17069
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.4
>Reporter: Andrew Purtell
>Assignee: Abhishek Singh Chouhan
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5
>
> Attachments: daughter_1_d55ef81c2f8299abbddfce0445067830.log, 
> daughter_2_08629d59564726da2497f70451aafcdb.log, 
> HBASE-17069-addendum.branch-1.3.001.patch, HBASE-17069.branch-1.3.001.patch, 
> HBASE-17069.branch-1.3.002.patch, HBASE-17069.master.001.patch, logs.tar.gz, 
> parent-393d2bfd8b1c52ce08540306659624f2.log
>
>
> I have been seeing frequent ITBLL failures testing various versions of 1.2.x. 
> Over the lifetime of 1.2.x the following issues have been fixed:
> - HBASE-15315 (Remove always set super user call as high priority)
> - HBASE-16093 (Fix splits failed before creating daughter regions leave meta 
> inconsistent)
> And this one is pending:
> - HBASE-17044 (Fix merge failed before creating merged region leaves meta 
> inconsistent)
> I can apply all of the above to branch-1.2 and still see this failure: 
> *The life of stillborn region d55ef81c2f8299abbddfce0445067830*
> *Master sees SPLITTING_NEW*
> {noformat}
> 2016-11-08 04:23:21,186 INFO  [AM.ZK.Worker-pool2-t82] master.RegionStates: 
> Transition null to {d55ef81c2f8299abbddfce0445067830 state=SPLITTING_NEW, 
> ts=1478579001186, server=node-3.cluster,16020,1478578389506}
> {noformat}
> *The RegionServer creates it*
> {noformat}
> 2016-11-08 04:23:26,035 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for GomnU: blockCache=LruBlockCache{blockCount=34, 
> currentSize=14996112, freeSize=12823716208, maxSize=12838712320, 
> heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,038 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for big: blockCache=LruBlockCache{blockCount=34, 
> currentSize=14996112, freeSize=12823716208, maxSize=12838712320, 
> heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,442 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for meta: blockCache=LruBlockCache{blockCount=63, 
> currentSize=17187656, freeSize=12821524664, maxSize=12838712320, 
> heapSize=17187656, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,713 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for nwmrW: blockCache=LruBlockCache{blockCount=96, 
> currentSize=19178440, freeSize=12819533880, maxSize=12838712320, 
> heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,715 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for piwbr: blockCache=LruBlockCache{blockCount=96, 
> currentSize=19178440, freeSize=12819533880, maxSize=12838712320, 
> heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 

[jira] [Commented] (HBASE-16902) Remove directory layout/ filesystem references from hbck tool

2017-02-23 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881318#comment-15881318
 ] 

Umesh Agashe commented on HBASE-16902:
--

Hi [~water], great to hear that you can spend some time on this feature. We 
recently (on Feb 17, 2017) had a discussion about the current status of this 
work. We are thinking about changing the approach. Details are available in 
this doc: 
https://docs.google.com/document/d/128Q0BqJY7OvHMUpEpZWKCaBrH1qDjpxxOVkX2KM46No/edit#heading=h.iyja9q78fh2j.

The doc also has a Review Board link to the POC code for the new approach. Let 
me know your thoughts about the new approach and the POC code.

Thanks, Umesh

> Remove directory layout/ filesystem references from hbck tool
> -
>
> Key: HBASE-16902
> URL: https://issues.apache.org/jira/browse/HBASE-16902
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filesystem Integration
>Reporter: Umesh Agashe
>Assignee: Xiang Li
>
> Remove directory layout/ filesystem references from hbck tool. List of files:
> {code}
> hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
> hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/HFileCorruptionChecker.java
> hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/OfflineMetaRepair.java
> {code}





[jira] [Updated] (HBASE-17662) Disable in-memory flush when replaying from WAL

2017-02-23 Thread Anastasia Braginsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anastasia Braginsky updated HBASE-17662:

Attachment: HBASE-17662-V05.patch

> Disable in-memory flush when replaying from WAL
> ---
>
> Key: HBASE-17662
> URL: https://issues.apache.org/jira/browse/HBASE-17662
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17662-V02.patch, HBASE-17662-V03.patch, 
> HBASE-17662-V04.patch, HBASE-17662-V05.patch
>
>
> When replaying the edits from WAL, the region's updateLock is not taken, 
> because a single-threaded action is assumed. However, the thread-safety of 
> the in-memory flush of CompactingMemStore is based on taking the region's 
> updateLock. 
> The in-memory flush can be skipped at replay time (everything is 
> flushed to disk just after the replay anyway). Therefore it is acceptable to 
> skip the in-memory flush action while the updates come as part of replay from 
> WAL.





[jira] [Commented] (HBASE-17069) RegionServer writes invalid META entries for split daughters in some circumstances

2017-02-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15881299#comment-15881299
 ] 

Andrew Purtell commented on HBASE-17069:


Ok, committing addendum shortly.

> RegionServer writes invalid META entries for split daughters in some 
> circumstances
> --
>
> Key: HBASE-17069
> URL: https://issues.apache.org/jira/browse/HBASE-17069
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.4
>Reporter: Andrew Purtell
>Assignee: Abhishek Singh Chouhan
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5
>
> Attachments: daughter_1_d55ef81c2f8299abbddfce0445067830.log, 
> daughter_2_08629d59564726da2497f70451aafcdb.log, 
> HBASE-17069-addendum.branch-1.3.001.patch, HBASE-17069.branch-1.3.001.patch, 
> HBASE-17069.branch-1.3.002.patch, HBASE-17069.master.001.patch, logs.tar.gz, 
> parent-393d2bfd8b1c52ce08540306659624f2.log
>
>
> I have been seeing frequent ITBLL failures testing various versions of 1.2.x. 
> Over the lifetime of 1.2.x the following issues have been fixed:
> - HBASE-15315 (Remove always set super user call as high priority)
> - HBASE-16093 (Fix splits failed before creating daughter regions leave meta 
> inconsistent)
> And this one is pending:
> - HBASE-17044 (Fix merge failed before creating merged region leaves meta 
> inconsistent)
> I can apply all of the above to branch-1.2 and still see this failure: 
> *The life of stillborn region d55ef81c2f8299abbddfce0445067830*
> *Master sees SPLITTING_NEW*
> {noformat}
> 2016-11-08 04:23:21,186 INFO  [AM.ZK.Worker-pool2-t82] master.RegionStates: 
> Transition null to {d55ef81c2f8299abbddfce0445067830 state=SPLITTING_NEW, 
> ts=1478579001186, server=node-3.cluster,16020,1478578389506}
> {noformat}
> *The RegionServer creates it*
> {noformat}
> 2016-11-08 04:23:26,035 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for GomnU: blockCache=LruBlockCache{blockCount=34, 
> currentSize=14996112, freeSize=12823716208, maxSize=12838712320, 
> heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,038 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for big: blockCache=LruBlockCache{blockCount=34, 
> currentSize=14996112, freeSize=12823716208, maxSize=12838712320, 
> heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,442 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for meta: blockCache=LruBlockCache{blockCount=63, 
> currentSize=17187656, freeSize=12821524664, maxSize=12838712320, 
> heapSize=17187656, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,713 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for nwmrW: blockCache=LruBlockCache{blockCount=96, 
> currentSize=19178440, freeSize=12819533880, maxSize=12838712320, 
> heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,715 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for piwbr: blockCache=LruBlockCache{blockCount=96, 
> currentSize=19178440, freeSize=12819533880, maxSize=12838712320, 
> heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,717 INFO  
> 

[jira] [Updated] (HBASE-17133) Backup documentation update

2017-02-23 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-17133:
--
Description: 
We need to update the backup doc to sync it with the current implementation and 
to add a section for current limitations:
{quote}
- if you write to the table with Durability.SKIP_WAL, your data will not
be in the incremental backup
 - if you bulk-load files, that data will not be in the incremental backup
(HBASE-14417)
 - the incremental backup will contain not only the data of the table you
specified but also data from regions of other tables that are on the same
set of RSes (HBASE-14141) ...maybe a note about security around this topic
 - the incremental backup will contain not just the "latest row" between
backup A and B but all the updates that occurred in between; however, the
restore does not allow you to restore up to an arbitrary point in time,
as the restore will always be up to the "latest backup point"
 - you should limit the number of "incremental" backups to N (or maybe a
SIZE limit) to avoid replay time becoming the bottleneck (HBASE-14135)
{quote} 

Update command line tool section

Clarify restore backup section

Add section on backup delete algorithm

Add section on how backup image dependency chain works.

Add section for configuration

hbase.backup.enable=true
hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner
hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager
hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager

  was:
We need to update backup doc to sync it with the current implementation and to 
add section for current limitations:
{quote}
- if you write to the table with Durability.SKIP_WALS your data will not
be in the incremental-backup
 - if you bulkload files that data will not be in the incremental backup
(HBASE-14417)
 - the incremental backup will not only contains the data of the table you
specified but also the regions from other tables that are on the same set
of RSs (HBASE-14141) ...maybe a note about security around this topic
 - the incremental backup will not contains just the "latest row" between
backup A and B, but it will also contains all the updates occurred in
between. but the restore does not allow you to restore up to a certain
point in time, the restore will always be up to the "latest backup point".
 - you should limit the number of "incremental" up to N (or maybe SIZE), to
avoid replay time becoming the bottleneck. (HBASE-14135)
{quote} 

Update command line tool section

Clarify restore backup section

Add section on backup delete algorithm

Add section on how backup image dependency chain works.


> Backup documentation update
> ---
>
> Key: HBASE-17133
> URL: https://issues.apache.org/jira/browse/HBASE-17133
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
> Fix For: HBASE-7912
>
>
> We need to update the backup doc to sync it with the current implementation 
> and to add a section for current limitations:
> {quote}
> - if you write to the table with Durability.SKIP_WAL, your data will not
> be in the incremental backup
>  - if you bulk-load files, that data will not be in the incremental backup
> (HBASE-14417)
>  - the incremental backup will contain not only the data of the table you
> specified but also data from regions of other tables that are on the same
> set of RSes (HBASE-14141) ...maybe a note about security around this topic
>  - the incremental backup will contain not just the "latest row" between
> backup A and B but all the updates that occurred in between; however, the
> restore does not allow you to restore up to an arbitrary point in time,
> as the restore will always be up to the "latest backup point"
>  - you should limit the number of "incremental" backups to N (or maybe a
> SIZE limit) to avoid replay time becoming the bottleneck (HBASE-14135)
> {quote} 
> Update command line tool section
> Clarify restore backup section
> Add section on backup delete algorithm
> Add section on how backup image dependency chain works.
> Add section for configuration
> hbase.backup.enable=true
> hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner
> hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager
> hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager
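As a sketch, the four properties listed above would land in hbase-site.xml roughly as follows. Values are placeholders: merge the plugin/class lists with any values already present in your configuration.

```xml
<!-- Sketch of the backup-related configuration for hbase-site.xml.
     Property names come from the list above; YOUR_PLUGINS/YOUR_CLASSES
     stand for any existing values that must be preserved. -->
<property>
  <name>hbase.backup.enable</name>
  <value>true</value>
</property>
<property>
  <name>hbase.master.logcleaner.plugins</name>
  <value>YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner</value>
</property>
<property>
  <name>hbase.procedure.master.classes</name>
  <value>YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager</value>
</property>
<property>
  <name>hbase.procedure.regionserver.classes</name>
  <value>YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager</value>
</property>
```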





[jira] [Created] (HBASE-17686) Improve Javadoc comments in Observer Interfaces

2017-02-23 Thread Zach York (JIRA)
Zach York created HBASE-17686:
-

 Summary: Improve Javadoc comments in Observer Interfaces
 Key: HBASE-17686
 URL: https://issues.apache.org/jira/browse/HBASE-17686
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors
Affects Versions: 2.0.0
Reporter: Zach York
Assignee: Zach York
Priority: Minor


Based on comments from https://issues.apache.org/jira/browse/HBASE-17312, 
we should improve the Javadoc comments in the Observer interfaces. This JIRA 
includes adding @return tags to clarify what is being returned (and why), and 
either improving @param/@throws tags or removing them if there is no way to 
provide meaningful information.
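As an illustration of the requested style, here is a hypothetical observer hook (not an actual HBase interface) whose Javadoc carries a meaningful @return and informative @param/@throws tags rather than restating the signature:

```java
// Illustrative only: a made-up observer hook showing the Javadoc style this
// issue asks for. The interface and method names are hypothetical.
public interface ExampleRegionObserver {

  /**
   * Called before a read of {@code rowKey} is executed.
   *
   * @param rowKey the row being read; must be non-empty
   * @param results the cells accumulated so far; a hook may add to or
   *        replace them
   * @return {@code true} to bypass the default read and serve
   *         {@code results} as-is, {@code false} to continue with the
   *         normal read path
   * @throws IllegalArgumentException if {@code rowKey} is empty, in which
   *         case the read is aborted
   */
  boolean preGet(byte[] rowKey, java.util.List<String> results);
}
```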





[jira] [Created] (HBASE-17684) Tools/API to read favored nodes for region(s)

2017-02-23 Thread Thiruvel Thirumoolan (JIRA)
Thiruvel Thirumoolan created HBASE-17684:


 Summary: Tools/API to read favored nodes for region(s)
 Key: HBASE-17684
 URL: https://issues.apache.org/jira/browse/HBASE-17684
 Project: HBase
  Issue Type: Sub-task
  Components: FavoredNodes
Reporter: Thiruvel Thirumoolan
Assignee: Thiruvel Thirumoolan


We need APIs to read favored nodes (FN) from the Master. This will help in 
troubleshooting when regions are stuck in RIT (regions in transition), e.g. 
because all of their favored nodes are dead. For small clusters we could just 
read from SnapshotOfRegionAssignmentFromMeta, but for large clusters that 
takes 4-5 minutes.





[jira] [Created] (HBASE-17685) Tools/Admin API to dump the replica load of server(s)

2017-02-23 Thread Thiruvel Thirumoolan (JIRA)
Thiruvel Thirumoolan created HBASE-17685:


 Summary: Tools/Admin API to dump the replica load of server(s)
 Key: HBASE-17685
 URL: https://issues.apache.org/jira/browse/HBASE-17685
 Project: HBase
  Issue Type: Sub-task
  Components: FavoredNodes
Reporter: Thiruvel Thirumoolan
Assignee: Thiruvel Thirumoolan


RPM (RegionPlacementMaintainer) has an option to dump the favored node 
distribution. We need an API to get the replica load from the Master.





[jira] [Created] (HBASE-17683) Admin API to update favored nodes in Master

2017-02-23 Thread Thiruvel Thirumoolan (JIRA)
Thiruvel Thirumoolan created HBASE-17683:


 Summary: Admin API to update favored nodes in Master
 Key: HBASE-17683
 URL: https://issues.apache.org/jira/browse/HBASE-17683
 Project: HBase
  Issue Type: Sub-task
  Components: FavoredNodes
Reporter: Thiruvel Thirumoolan
Assignee: Thiruvel Thirumoolan


For troubleshooting, decommissioning nodes, or replacing nodes, we need an API 
to update the favored nodes (FN) for a set of regions in the Master.





[jira] [Updated] (HBASE-17680) Run mini cluster through JNI in tests

2017-02-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17680:
---
Attachment: (was: 17680.v8.txt)

> Run mini cluster through JNI in tests
> -
>
> Key: HBASE-17680
> URL: https://issues.apache.org/jira/browse/HBASE-17680
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 17680.v1.txt, 17680.v3.txt, 17680.v8.txt
>
>
> Currently, tests start a local HBase cluster through the hbase shell.
> This way there is less control over the configuration of the local cluster.
> This issue would replace the hbase shell with a JNI interface to the mini 
> cluster, giving us full control over the cluster's behavior.
> Thanks to [~devaraj] who started this initiative.




