[jira] [Updated] (HBASE-13069) Thrift Http Server returning an error code of 500 instead of 401 when authentication fails.

2015-02-19 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-13069:

Attachment: HBASE-13069.patch

Re-attaching the patch for the QA bot to pick up.

 Thrift Http Server returning an error code of 500 instead of 401 when 
 authentication fails.
 ---

 Key: HBASE-13069
 URL: https://issues.apache.org/jira/browse/HBASE-13069
 Project: HBase
  Issue Type: Bug
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Attachments: HBASE-13069.patch, HBASE-13069.patch


 As per description.
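A minimal illustrative sketch (not the attached patch; the servlet and helper names below are assumptions) of the kind of change the summary describes: have the Thrift HTTP servlet answer a failed authentication with 401 plus a Negotiate challenge instead of letting the exception surface as a 500.
{code:title=Hypothetical 401-on-auth-failure sketch|borderStyle=solid}
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Simplified stand-in for the Thrift HTTP servlet; not the real class in hbase-thrift.
public class AuthAwareThriftServlet extends HttpServlet {

  @Override
  protected void doPost(HttpServletRequest request, HttpServletResponse response)
      throws ServletException, IOException {
    try {
      authenticate(request);                       // assumed SPNEGO/Negotiate validation
    } catch (SecurityException failedAuth) {
      // Map the auth failure to 401 + a challenge instead of a generic 500.
      response.addHeader("WWW-Authenticate", "Negotiate");
      response.sendError(HttpServletResponse.SC_UNAUTHORIZED,
          "Authentication failed: " + failedAuth.getMessage());
      return;
    }
    // ... delegate to the Thrift processor for the authenticated user ...
  }

  // Placeholder check; a real implementation would validate the Negotiate token via GSS.
  private void authenticate(HttpServletRequest request) {
    String header = request.getHeader("Authorization");
    if (header == null || !header.startsWith("Negotiate ")) {
      throw new SecurityException("missing or malformed Negotiate token");
    }
  }
}
{code}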



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13001) NullPointer in master logs for table.jsp

2015-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327145#comment-14327145
 ] 

Hadoop QA commented on HBASE-13001:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699617/HBASE-13001.patch
  against master branch at commit 31f17b17f0e2d12550b97098ec45ab59c5d98d58.
  ATTACHMENT ID: 12699617

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.
{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+tableHeader = "<h2>Table Regions</h2><table class=\"table table-striped\"><tr><th>Name</th><th>Region Server</th><th>Start Key</th><th>End Key</th><th>Locality</th><th>Requests</th><th>ReplicaID</th></tr>";
+tableHeader = "<h2>Table Regions</h2><table class=\"table table-striped\"><tr><th>Name</th><th>Region Server</th><th>Start Key</th><th>End Key</th><th>Locality</th><th>Requests</th></tr>";

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestShell

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12906//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12906//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12906//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12906//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12906//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12906//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12906//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12906//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12906//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12906//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12906//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12906//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12906//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12906//console

This message is automatically generated.

 NullPointer in master logs for table.jsp
 

 Key: HBASE-13001
 URL: https://issues.apache.org/jira/browse/HBASE-13001
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 0.98.10
Reporter: Vikas Vishwakarma
Assignee: Vikas Vishwakarma
Priority: Trivial
 Fix For: 2.0.0

 Attachments: HBASE-13001.patch, Table_not_ready.png


 Seeing a NullPointer issue in master logs for table.jsp probably similar to 
 HBASE-6607
 {noformat}
 2015-02-09 14:04:00,622 ERROR org.mortbay.log: 

[jira] [Commented] (HBASE-13054) Provide more tracing information for locking/latching events.

2015-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328057#comment-14328057
 ] 

Hadoop QA commented on HBASE-13054:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699712/HBASE-13054_v2.patch
  against master branch at commit 18402cc850b143bc6f88d90e62c42b9ef4131ca6.
  ATTACHMENT ID: 12699712

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.
{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1936 checkstyle errors (more than the master's current 1935 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.constraint.TestConstraint.testIsUnloaded(TestConstraint.java:223)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12912//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12912//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12912//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12912//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12912//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12912//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12912//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12912//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12912//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12912//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12912//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12912//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12912//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12912//console

This message is automatically generated.

 Provide more tracing information for locking/latching events.
 -

 Key: HBASE-13054
 URL: https://issues.apache.org/jira/browse/HBASE-13054
 Project: HBase
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-13054.patch, HBASE-13054_v2.patch


 Currently, not much tracing information is available for locking and latching 
 events, such as row-level locking during mini-batch mutations, region-level 
 locking during flush and close, and so on. It would be better to add trace 
 information for such events so that it is useful for finding the time spent 
 on locking and the time waiting on locks while analyzing performance issues in 
[jira] [Updated] (HBASE-10900) FULL table backup and restore

2015-02-19 Thread Demai Ni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Demai Ni updated HBASE-10900:
-
Fix Version/s: (was: 1.1.0)
 Assignee: (was: Demai Ni)

 FULL table backup and restore
 -

 Key: HBASE-10900
 URL: https://issues.apache.org/jira/browse/HBASE-10900
 Project: HBase
  Issue Type: Task
Reporter: Demai Ni
 Attachments: HBASE-10900-fullbackup-trunk-v1.patch, 
 HBASE-10900-trunk-v2.patch, HBASE-10900-trunk-v3.patch, 
 HBASE-10900-trunk-v4.patch


 h2. Feature Description
 This is a subtask of 
 [HBase-7912|https://issues.apache.org/jira/browse/HBASE-7912] to support FULL 
 backup/restore; it will provide the following functionality:
 {code:title=Backup Restore example|borderStyle=solid}
 /* backup from sourcecluster to targetcluster 
  */
 /* if no table name is specified, all tables from the source cluster will be 
 backed up */
 [sourcecluster]$ hbase backup create full 
 hdfs://hostname.targetcluster.org:9000/userid/backupdir t1_dn,t2_dn,t3_dn
 /* restore on targetcluster; this is a local restore 
  */
 /* backup_1396650096738 - backup image name 
  */
 /* t1_dn, etc. are the original table names. All tables will be restored if not 
 specified */
 /* t1_dn_restore, etc. are the restored table names. If not specified, the 
 original table names will be used */
 [targetcluster]$ hbase restore /userid/backupdir backup_1396650096738 
 t1_dn,t2_dn,t3_dn t1_dn_restore,t2_dn_restore,t3_dn_restore
 /* restore from targetcluster back to source cluster; this is a remote restore 
  */
 [sourcecluster]$ hbase restore 
 hdfs://hostname.targetcluster.org:9000/userid/backupdir backup_1396650096738 
 t1_dn,t2_dn,t3_dn t1_dn_restore,t2_dn_restore,t3_dn_restore
 {code}
 h2. Detailed layout and framework for the next jiras
 The patch is a wrapper around the existing snapshot and exportSnapshot, and will 
 be used as the base framework for the overall solution of 
 [HBase-7912|https://issues.apache.org/jira/browse/HBASE-7912] as described 
 below:
 * *bin/hbase*  : end-user command line interface to invoke 
 BackupClient and RestoreClient
 * *BackupClient.java*  : 'main' entry for backup operations. This patch will 
 only support 'full' backup. In future jiras, will support:
 ** *create* incremental backup
 ** *cancel* an ongoing backup
 ** *delete* an existing backup image
 ** *describe* the detailed information of a backup image
 ** show *history* of all successful backups 
 ** show the *status* of the latest backup request
 ** *convert* incremental backup WAL files into HFiles.  either on-the-fly 
 during create or after create
 ** *merge* backup image
 ** *stop* backing up a table in an existing backup image
 ** *show* tables of a backup image 
 * *BackupCommands.java* : a place to keep all the command usages and options
 * *BackupManager.java*  : handles backup requests on the server side and creates 
 BACKUP ZooKeeper nodes to keep track of backups. The timestamps kept in ZooKeeper 
 will be used for future incremental backups (not included in this jira). 
 Creates BackupContext and DispatchRequest. 
 * *BackupHandler.java*  : in this patch, it is a wrapper of snapshot and 
 exportsnapshot. In future jiras, 
 ** *timestamps* info will be recorded in ZK
 ** carry on *incremental* backup.  
 ** update backup *progress*
 ** set flags of *status*
 ** build up the *backupManifest* file (in this jira only limited info for a 
 full backup; later on, timestamps and dependencies of multiple backup images are 
 also recorded here)
 ** clean up after *failed* backup 
 ** clean up after *cancelled* backup
 ** allow on-the-fly *convert* during incremental backup 
 * *BackupContext.java* : encapsulate backup information like backup ID, table 
 names, directory info, phase, TimeStamps of backup progress, size of data, 
 ancestor info, etc. 
 * *BackupCopier.java*  : the copying operation. Later on, to support progress 
 reporting and mapper estimation; extends DistCp for progress updates to ZK 
 during backup. 
 * *BackupExcpetion.java*: to handle exceptions from backup/restore
 * *BackupManifest.java* : encapsulates all the backup image information. The 
 manifest info will be bundled as a manifest file together with the data, so that 
 each backup image contains all the info needed for restore. 
 * *BackupStatus.java*   : encapsulate backup status at table level during 
 backup progress
 * *BackupUtil.java* : utility methods during backup process
 * *RestoreClient.java*  : 'main' entry for restore operations. This patch 
 will only support 'full' backup. 
 * *RestoreUtil.java*: utility methods during restore process
 * *ExportSnapshot.java* : remove 'final' so that another class, 
 SnapshotCopy.java, can extend it
 * *SnapshotCopy.java*   : 

[jira] [Commented] (HBASE-10900) FULL table backup and restore

2015-02-19 Thread Demai Ni (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327973#comment-14327973
 ] 

Demai Ni commented on HBASE-10900:
--

Due to personal reasons, I can't work directly on contributing back to the open 
source community at this moment. I'm putting this jira as 'unassigned' and 
removing the fix version.

Hopefully someone can pick it up, or my situation may change and I can continue 
to work on this. 

Thanks... Demai

 FULL table backup and restore
 -

 Key: HBASE-10900
 URL: https://issues.apache.org/jira/browse/HBASE-10900
 Project: HBase
  Issue Type: Task
Reporter: Demai Ni
 Attachments: HBASE-10900-fullbackup-trunk-v1.patch, 
 HBASE-10900-trunk-v2.patch, HBASE-10900-trunk-v3.patch, 
 HBASE-10900-trunk-v4.patch


 h2. Feature Description
 This is a subtask of 
 [HBase-7912|https://issues.apache.org/jira/browse/HBASE-7912] to support FULL 
 backup/restore; it will provide the following functionality:
 {code:title=Backup Restore example|borderStyle=solid}
 /* backup from sourcecluster to targetcluster 
  */
 /* if no table name is specified, all tables from the source cluster will be 
 backed up */
 [sourcecluster]$ hbase backup create full 
 hdfs://hostname.targetcluster.org:9000/userid/backupdir t1_dn,t2_dn,t3_dn
 /* restore on targetcluster; this is a local restore 
  */
 /* backup_1396650096738 - backup image name 
  */
 /* t1_dn, etc. are the original table names. All tables will be restored if not 
 specified */
 /* t1_dn_restore, etc. are the restored table names. If not specified, the 
 original table names will be used */
 [targetcluster]$ hbase restore /userid/backupdir backup_1396650096738 
 t1_dn,t2_dn,t3_dn t1_dn_restore,t2_dn_restore,t3_dn_restore
 /* restore from targetcluster back to source cluster; this is a remote restore 
  */
 [sourcecluster]$ hbase restore 
 hdfs://hostname.targetcluster.org:9000/userid/backupdir backup_1396650096738 
 t1_dn,t2_dn,t3_dn t1_dn_restore,t2_dn_restore,t3_dn_restore
 {code}
 h2. Detailed layout and framework for the next jiras
 The patch is a wrapper around the existing snapshot and exportSnapshot, and will 
 be used as the base framework for the overall solution of 
 [HBase-7912|https://issues.apache.org/jira/browse/HBASE-7912] as described 
 below:
 * *bin/hbase*  : end-user command line interface to invoke 
 BackupClient and RestoreClient
 * *BackupClient.java*  : 'main' entry for backup operations. This patch will 
 only support 'full' backup. In future jiras, will support:
 ** *create* incremental backup
 ** *cancel* an ongoing backup
 ** *delete* an existing backup image
 ** *describe* the detailed information of a backup image
 ** show *history* of all successful backups 
 ** show the *status* of the latest backup request
 ** *convert* incremental backup WAL files into HFiles.  either on-the-fly 
 during create or after create
 ** *merge* backup image
 ** *stop* backing up a table in an existing backup image
 ** *show* tables of a backup image 
 * *BackupCommands.java* : a place to keep all the command usages and options
 * *BackupManager.java*  : handles backup requests on the server side and creates 
 BACKUP ZooKeeper nodes to keep track of backups. The timestamps kept in ZooKeeper 
 will be used for future incremental backups (not included in this jira). 
 Creates BackupContext and DispatchRequest. 
 * *BackupHandler.java*  : in this patch, it is a wrapper of snapshot and 
 exportsnapshot. In future jiras, 
 ** *timestamps* info will be recorded in ZK
 ** carry on *incremental* backup.  
 ** update backup *progress*
 ** set flags of *status*
 ** build up the *backupManifest* file (in this jira only limited info for a 
 full backup; later on, timestamps and dependencies of multiple backup images are 
 also recorded here)
 ** clean up after *failed* backup 
 ** clean up after *cancelled* backup
 ** allow on-the-fly *convert* during incremental backup 
 * *BackupContext.java* : encapsulate backup information like backup ID, table 
 names, directory info, phase, TimeStamps of backup progress, size of data, 
 ancestor info, etc. 
 * *BackupCopier.java*  : the copying operation. Later on, to support progress 
 reporting and mapper estimation; extends DistCp for progress updates to ZK 
 during backup. 
 * *BackupExcpetion.java*: to handle exceptions from backup/restore
 * *BackupManifest.java* : encapsulates all the backup image information. The 
 manifest info will be bundled as a manifest file together with the data, so that 
 each backup image contains all the info needed for restore. 
 * *BackupStatus.java*   : encapsulate backup status at table level during 
 backup progress
 * *BackupUtil.java* : utility methods during backup process
 * *RestoreClient.java*  : 'main' entry 

[jira] [Updated] (HBASE-11085) Incremental Backup Restore support

2015-02-19 Thread Demai Ni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Demai Ni updated HBASE-11085:
-
Fix Version/s: (was: 1.1.0)
 Assignee: (was: Demai Ni)

 Incremental Backup Restore support
 --

 Key: HBASE-11085
 URL: https://issues.apache.org/jira/browse/HBASE-11085
 Project: HBase
  Issue Type: New Feature
Reporter: Demai Ni
 Attachments: 
 HBASE-11085-trunk-v1-contains-HBASE-10900-trunk-v4.patch, 
 HBASE-11085-trunk-v1.patch, 
 HBASE-11085-trunk-v2-contain-HBASE-10900-trunk-v4.patch, 
 HBASE-11085-trunk-v2.patch, HLogPlayer.java


 h2. Feature Description
 This jira is part of 
 [HBASE-7912|https://issues.apache.org/jira/browse/HBASE-7912] and depends on the 
 full backup work in [HBASE-10900|https://issues.apache.org/jira/browse/HBASE-10900]. 
 For the detailed layout and framework, please refer to 
 [HBASE-10900|https://issues.apache.org/jira/browse/HBASE-10900].
 When a client issues an incremental backup request, BackupManager will check 
 the request and then kick off a global procedure via HBaseAdmin for all the 
 active region servers to roll their logs. Each region server will record its 
 log number in ZooKeeper. Then we determine which logs need to be included in 
 this incremental backup, and use DistCp to copy them to the target location. At 
 the same time, the dependency of the backup image will be recorded, and later 
 on saved in the backup manifest file.
 Restore replays the backed-up WAL logs on the target HBase instance. The 
 replay will occur after the full backup image has been restored.
 Since an incremental backup image depends on the prior full backup image and 
 any earlier incremental images, the manifest file will be used to store the 
 dependency lineage during backup, and used at restore time for point-in-time 
 (PIT) restore.  
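An illustrative sketch only (the znode path, data format, and class below are assumptions, not taken from the attached patches) of how a region server could record its latest rolled log number under a backup znode, which the BackupManager could later read to decide which WALs belong in an incremental image:
{code:title=Hypothetical log-roll recording sketch|borderStyle=solid}
import java.nio.charset.StandardCharsets;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class LogRollRecorder {
  // Hypothetical layout: /hbase/backup/logroll/<serverName> holds the last rolled log number.
  private static final String BASE = "/hbase/backup/logroll";

  // Assumes the parent znodes under BASE already exist.
  public static void recordLogRoll(ZooKeeper zk, String serverName, long logNumber)
      throws KeeperException, InterruptedException {
    String path = BASE + "/" + serverName;
    byte[] data = Long.toString(logNumber).getBytes(StandardCharsets.UTF_8);
    if (zk.exists(path, false) == null) {
      zk.create(path, data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    } else {
      zk.setData(path, data, -1);   // -1: ignore the znode version
    }
  }
}
{code}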
 h2. Use case(i.e  example)
 {code:title=Incremental Backup Restore example|borderStyle=solid}
 /***/
 /* STEP1:  FULL backup from sourcecluster to targetcluster  
 /* if no table name is specified, all tables from the source cluster will be 
 backed up 
 /***/
 [sourcecluster]$ hbase backup create full 
 hdfs://hostname.targetcluster.org:9000/userid/backupdir t1_dn,t2_dn,t3_dn
 ...
 14/05/09 13:35:46 INFO backup.BackupManager: Backup request 
 backup_1399667695966 has been executed.
 /***/
 /* STEP2:   In HBase Shell, put a few rows
 
 /***/
 hbase(main):002:0> put 't1_dn','row100','cf1:q1','value100_0509_increm1'
 hbase(main):003:0> put 't2_dn','row100','cf1:q1','value100_0509_increm1'
 hbase(main):004:0> put 't3_dn','row100','cf1:q1','value100_0509_increm1'
 /***/
 /* STEP3:   Take the 1st incremental backup   
  
 /***/
 [sourcecluster]$ hbase backup create incremental 
 hdfs://hostname.targetcluster.org:9000/userid/backupdir
 ...
 14/05/09 13:37:45 INFO backup.BackupManager: Backup request 
 backup_1399667851020 has been executed.
 /***/
 /* STEP4:   In HBase Shell, put a few more rows.  
 
 /*   update 'row100', and create new 'row101' 
   
 /***/
 hbase(main):005:0> put 't3_dn','row100','cf1:q1','value101_0509_increm2'
 hbase(main):006:0> put 't2_dn','row100','cf1:q1','value101_0509_increm2'
 hbase(main):007:0> put 't1_dn','row100','cf1:q1','value101_0509_increm2'
 hbase(main):009:0> put 't1_dn','row101','cf1:q1','value101_0509_increm2'
 hbase(main):010:0> put 't2_dn','row101','cf1:q1','value101_0509_increm2'
 hbase(main):011:0> put 't3_dn','row101','cf1:q1','value101_0509_increm2'
 /***/
 /* STEP5:   Take the 2nd incremental backup   
 
 /***/
 [sourcecluster]$ hbase backup create incremental 
 hdfs://hostname.targetcluster.org:9000/userid/backupdir
 ...
 14/05/09 13:39:33 INFO backup.BackupManager: Backup request 
 backup_1399667959165 has been executed.
 /***/
 

[jira] [Commented] (HBASE-11085) Incremental Backup Restore support

2015-02-19 Thread Demai Ni (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327975#comment-14327975
 ] 

Demai Ni commented on HBASE-11085:
--

Due to personal reasons, I can't work directly on contributing back to the open 
source community at this moment. I'm putting this jira as 'unassigned' and 
removing the fix version.

Hopefully someone can pick it up, or my situation may change and then I will 
continue to work on this. 

Thanks... Demai

 Incremental Backup Restore support
 --

 Key: HBASE-11085
 URL: https://issues.apache.org/jira/browse/HBASE-11085
 Project: HBase
  Issue Type: New Feature
Reporter: Demai Ni
 Attachments: 
 HBASE-11085-trunk-v1-contains-HBASE-10900-trunk-v4.patch, 
 HBASE-11085-trunk-v1.patch, 
 HBASE-11085-trunk-v2-contain-HBASE-10900-trunk-v4.patch, 
 HBASE-11085-trunk-v2.patch, HLogPlayer.java


 h2. Feature Description
 This jira is part of 
 [HBASE-7912|https://issues.apache.org/jira/browse/HBASE-7912] and depends on the 
 full backup work in [HBASE-10900|https://issues.apache.org/jira/browse/HBASE-10900]. 
 For the detailed layout and framework, please refer to 
 [HBASE-10900|https://issues.apache.org/jira/browse/HBASE-10900].
 When a client issues an incremental backup request, BackupManager will check 
 the request and then kick off a global procedure via HBaseAdmin for all the 
 active region servers to roll their logs. Each region server will record its 
 log number in ZooKeeper. Then we determine which logs need to be included in 
 this incremental backup, and use DistCp to copy them to the target location. At 
 the same time, the dependency of the backup image will be recorded, and later 
 on saved in the backup manifest file.
 Restore replays the backed-up WAL logs on the target HBase instance. The 
 replay will occur after the full backup image has been restored.
 Since an incremental backup image depends on the prior full backup image and 
 any earlier incremental images, the manifest file will be used to store the 
 dependency lineage during backup, and used at restore time for point-in-time 
 (PIT) restore.  
 h2. Use case(i.e  example)
 {code:title=Incremental Backup Restore example|borderStyle=solid}
 /***/
 /* STEP1:  FULL backup from sourcecluster to targetcluster  
 /* if no table name is specified, all tables from the source cluster will be 
 backed up 
 /***/
 [sourcecluster]$ hbase backup create full 
 hdfs://hostname.targetcluster.org:9000/userid/backupdir t1_dn,t2_dn,t3_dn
 ...
 14/05/09 13:35:46 INFO backup.BackupManager: Backup request 
 backup_1399667695966 has been executed.
 /***/
 /* STEP2:   In HBase Shell, put a few rows
 
 /***/
 hbase(main):002:0> put 't1_dn','row100','cf1:q1','value100_0509_increm1'
 hbase(main):003:0> put 't2_dn','row100','cf1:q1','value100_0509_increm1'
 hbase(main):004:0> put 't3_dn','row100','cf1:q1','value100_0509_increm1'
 /***/
 /* STEP3:   Take the 1st incremental backup   
  
 /***/
 [sourcecluster]$ hbase backup create incremental 
 hdfs://hostname.targetcluster.org:9000/userid/backupdir
 ...
 14/05/09 13:37:45 INFO backup.BackupManager: Backup request 
 backup_1399667851020 has been executed.
 /***/
 /* STEP4:   In HBase Shell, put a few more rows.  
 
 /*   update 'row100', and create new 'row101' 
   
 /***/
 hbase(main):005:0> put 't3_dn','row100','cf1:q1','value101_0509_increm2'
 hbase(main):006:0> put 't2_dn','row100','cf1:q1','value101_0509_increm2'
 hbase(main):007:0> put 't1_dn','row100','cf1:q1','value101_0509_increm2'
 hbase(main):009:0> put 't1_dn','row101','cf1:q1','value101_0509_increm2'
 hbase(main):010:0> put 't2_dn','row101','cf1:q1','value101_0509_increm2'
 hbase(main):011:0> put 't3_dn','row101','cf1:q1','value101_0509_increm2'
 /***/
 /* STEP5:   Take the 2nd incremental backup   
 
 /***/
 [sourcecluster]$ hbase backup create 

[jira] [Updated] (HBASE-13075) TableInputFormatBase spuriously warning about multiple initializeTable calls

2015-02-19 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13075:

Status: Patch Available  (was: Open)

 TableInputFormatBase spuriously warning about multiple initializeTable calls
 

 Key: HBASE-13075
 URL: https://issues.apache.org/jira/browse/HBASE-13075
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 1.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Minor
 Fix For: 1.0.1, 1.1.0, 2.2.0

 Attachments: HBASE-13075.1.patch.txt


 TableInputFormatBase incorrectly checks a local variable (that can't be null) 
 rather than the instance variable (which can be null) to see if it has been 
 called multiple times.
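A condensed sketch of the reported pattern (names simplified; not the literal TableInputFormatBase code): the warning is guarded by the just-assigned parameter instead of the instance field, so it fires on every call.
{code:title=Simplified illustration of the check|borderStyle=solid}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;

public class InitOnceExample {
  private Connection connection;   // instance field: null until the first initialization

  // Buggy shape: the parameter can never be null here, so the warning logs every time.
  public void initializeTableBuggy(Connection connection, TableName tableName) {
    if (connection != null) {                    // checks the parameter, always true
      System.out.println("WARN: initializeTable called multiple times.");
    }
    this.connection = connection;
  }

  // Fixed shape: check the instance field, which is only non-null after a prior call.
  public void initializeTableFixed(Connection connection, TableName tableName) {
    if (this.connection != null) {
      System.out.println("WARN: initializeTable called multiple times.");
    }
    this.connection = connection;
  }
}
{code}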



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13075) TableInputFormatBase spuriously warning about multiple initializeTable calls

2015-02-19 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13075:

Attachment: HBASE-13075.1.patch.txt

Manually tested by running the TestTableInputFormat classes and looking at the 
log output.

 TableInputFormatBase spuriously warning about multiple initializeTable calls
 

 Key: HBASE-13075
 URL: https://issues.apache.org/jira/browse/HBASE-13075
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 1.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Minor
 Fix For: 1.0.1, 1.1.0, 2.2.0

 Attachments: HBASE-13075.1.patch.txt


 TableInputFormatBase incorrectly checks a local variable (that can't be null) 
 rather than the instance variable (which can be null) to see if it has been 
 called multiple times.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13075) TableInputFormatBase spuriously warning about multiple initializeTable calls

2015-02-19 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-13075:
---

 Summary: TableInputFormatBase spuriously warning about multiple 
initializeTable calls
 Key: HBASE-13075
 URL: https://issues.apache.org/jira/browse/HBASE-13075
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 1.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Minor
 Fix For: 2.2.0, 1.0.1, 1.1.0


TableInputFormatBase incorrectly checks a local variable (that can't be null) 
rather than the instance variable (which can be null) to see if it has been 
called multiple times.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12706) Support multiple port numbers in ZK quorum string

2015-02-19 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-12706:
---
Attachment: (was: HBASE-12706.v1-master.patch)

 Support multiple port numbers in ZK quorum string
 -

 Key: HBASE-12706
 URL: https://issues.apache.org/jira/browse/HBASE-12706
 Project: HBase
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
Priority: Critical
 Fix For: 2.0.0, 1.1.0


 HBase does not allow the zk quorum string to contain port numbers in this 
 format:
 {noformat}
 hostname1:port1,hostname2:port2,hostname3:port3
 {noformat}
 Instead it expects the string to be in this format:
 {noformat}
 hostname1,hostname2,hostname3:port3
 {noformat}
 And port3 is used as the client port for all hosts. We should relax the parsing 
 so that both forms are accepted.
 A sample exception:
 {code}
 java.io.IOException: Cluster key passed 
 host1:2181,host2:2181,host3:2181,host4:2181,host5:2181:2181:/hbase is 
 invalid, the format should 
 be:hbase.zookeeper.quorum:hbase.zookeeper.client.port:zookeeper.znode.parent
   at 
 org.apache.hadoop.hbase.zookeeper.ZKUtil.transformClusterKey(ZKUtil.java:403)
   at 
 org.apache.hadoop.hbase.zookeeper.ZKUtil.applyClusterKeyToConf(ZKUtil.java:386)
   at 
 org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.getPeerConf(ReplicationPeersZKImpl.java:304)
   at 
 org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.createPeer(ReplicationPeersZKImpl.java:435)
 {code}
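A rough parsing sketch (hedged; not the attached patch) that accepts both the per-host {{host:port}} form and the single trailing port form:
{code:title=Flexible quorum parsing sketch|borderStyle=solid}
import java.util.ArrayList;
import java.util.List;

public class QuorumParser {
  /**
   * Accepts "host1:port1,host2:port2,host3:port3" as well as
   * "host1,host2,host3:port3" and returns host:port entries for every host.
   */
  public static List<String> parseQuorum(String quorum, int defaultClientPort) {
    String[] servers = quorum.split(",");
    // If the last entry carries a port, reuse it for entries that do not specify one.
    int sharedPort = defaultClientPort;
    String last = servers[servers.length - 1];
    int lastColon = last.lastIndexOf(':');
    if (lastColon > 0) {
      sharedPort = Integer.parseInt(last.substring(lastColon + 1));
    }
    List<String> normalized = new ArrayList<>();
    for (String server : servers) {
      if (server.lastIndexOf(':') > 0) {
        normalized.add(server);                      // already host:port
      } else {
        normalized.add(server + ":" + sharedPort);   // apply the shared/default port
      }
    }
    return normalized;
  }
}
{code}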



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12706) Support multiple port numbers in ZK quorum string

2015-02-19 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-12706:
---
Attachment: HBASE-12706.v1-master.patch

 Support multiple port numbers in ZK quorum string
 -

 Key: HBASE-12706
 URL: https://issues.apache.org/jira/browse/HBASE-12706
 Project: HBase
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
Priority: Critical
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12706.v1-master.patch


 HBase does not allow the zk quorum string to contain port numbers in this 
 format:
 {noformat}
 hostname1:port1,hostname2:port2,hostname3:port3
 {noformat}
 Instead it expects the string to be in this format:
 {noformat}
 hostname1,hostname2,hostname3:port3
 {noformat}
 And port3 is used as the client port for all hosts. We should relax the parsing 
 so that both forms are accepted.
 A sample exception:
 {code}
 java.io.IOException: Cluster key passed 
 host1:2181,host2:2181,host3:2181,host4:2181,host5:2181:2181:/hbase is 
 invalid, the format should 
 be:hbase.zookeeper.quorum:hbase.zookeeper.client.port:zookeeper.znode.parent
   at 
 org.apache.hadoop.hbase.zookeeper.ZKUtil.transformClusterKey(ZKUtil.java:403)
   at 
 org.apache.hadoop.hbase.zookeeper.ZKUtil.applyClusterKeyToConf(ZKUtil.java:386)
   at 
 org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.getPeerConf(ReplicationPeersZKImpl.java:304)
   at 
 org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.createPeer(ReplicationPeersZKImpl.java:435)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12953) RegionServer is not functionally working with AsyncRpcClient in secure mode

2015-02-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328129#comment-14328129
 ] 

stack commented on HBASE-12953:
---

[~octo47] Are the TestMasterObserver's related at all?  Let me rerun.

 RegionServer is not functionally working with AsyncRpcClient in secure mode
 ---

 Key: HBASE-12953
 URL: https://issues.apache.org/jira/browse/HBASE-12953
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0, 1.1.0
Reporter: Ashish Singhi
Assignee: stack
Priority: Critical
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12953.patch, HBASE-12953_1.patch, 
 HBASE-12953_2.patch, HBASE-12953_2.patch, HBASE-12953_2.patch, 
 HBASE-12953_2.patch, HBASE-12953_2.patch, HBASE-12953_3.patch, 
 HBASE-12953_3.patch, HBASE-12953_3.patch, testcase.patch


 HBase version 2.0.0
 Default value for {{hbase.rpc.client.impl}} is set to AsyncRpcClient.
 When trying to install HBase with Kerberos, the RegionServer does not work 
 correctly. The following is logged in its log file:
 {noformat}
 2015-02-02 14:59:05,407 WARN  [AsyncRpcChannel-pool1-t1] 
 channel.DefaultChannelPipeline: An exceptionCaught() event was fired, and it 
 reached at the tail of the pipeline. It usually means the last handler in the 
 pipeline did not handle the exception.
 io.netty.channel.ChannelPipelineException: 
 org.apache.hadoop.hbase.security.SaslClientHandler.handlerAdded() has thrown 
 an exception; removed.
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:499)
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded(DefaultChannelPipeline.java:481)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst0(DefaultChannelPipeline.java:114)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:97)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:235)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:214)
   at 
 org.apache.hadoop.hbase.ipc.AsyncRpcChannel$2.operationComplete(AsyncRpcChannel.java:194)
   at 
 org.apache.hadoop.hbase.ipc.AsyncRpcChannel$2.operationComplete(AsyncRpcChannel.java:157)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
   at 
 io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
   at 
 io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
   at 
 io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:253)
   at 
 io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:288)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by 
 GSSException: No valid credentials provided (Mechanism level: Failed to find 
 any Kerberos tgt)]
   at 
 com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
   at 
 org.apache.hadoop.hbase.security.SaslClientHandler.handlerAdded(SaslClientHandler.java:154)
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:486)
   ... 20 more
 Caused by: GSSException: No valid credentials provided (Mechanism level: 
 Failed to find any Kerberos tgt)
   at 
 sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
   at 
 sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
   at 
 sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
   at 
 sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
   at 
 sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
   at 
 sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
   at 
 com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
 {noformat}
 When set 

[jira] [Updated] (HBASE-12953) RegionServer is not functionally working with AsyncRpcClient in secure mode

2015-02-19 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12953:
--
Attachment: HBASE-12953_3 (2).patch

Retry

 RegionServer is not functionally working with AsyncRpcClient in secure mode
 ---

 Key: HBASE-12953
 URL: https://issues.apache.org/jira/browse/HBASE-12953
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0, 1.1.0
Reporter: Ashish Singhi
Assignee: stack
Priority: Critical
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12953.patch, HBASE-12953_1.patch, 
 HBASE-12953_2.patch, HBASE-12953_2.patch, HBASE-12953_2.patch, 
 HBASE-12953_2.patch, HBASE-12953_2.patch, HBASE-12953_3 (2).patch, 
 HBASE-12953_3.patch, HBASE-12953_3.patch, HBASE-12953_3.patch, testcase.patch


 HBase version 2.0.0
 Default value for {{hbase.rpc.client.impl}} is set to AsyncRpcClient.
 When trying to install HBase with Kerberos, the RegionServer does not work 
 correctly. The following is logged in its log file:
 {noformat}
 2015-02-02 14:59:05,407 WARN  [AsyncRpcChannel-pool1-t1] 
 channel.DefaultChannelPipeline: An exceptionCaught() event was fired, and it 
 reached at the tail of the pipeline. It usually means the last handler in the 
 pipeline did not handle the exception.
 io.netty.channel.ChannelPipelineException: 
 org.apache.hadoop.hbase.security.SaslClientHandler.handlerAdded() has thrown 
 an exception; removed.
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:499)
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded(DefaultChannelPipeline.java:481)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst0(DefaultChannelPipeline.java:114)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:97)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:235)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:214)
   at 
 org.apache.hadoop.hbase.ipc.AsyncRpcChannel$2.operationComplete(AsyncRpcChannel.java:194)
   at 
 org.apache.hadoop.hbase.ipc.AsyncRpcChannel$2.operationComplete(AsyncRpcChannel.java:157)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
   at 
 io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
   at 
 io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
   at 
 io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:253)
   at 
 io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:288)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by 
 GSSException: No valid credentials provided (Mechanism level: Failed to find 
 any Kerberos tgt)]
   at 
 com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
   at 
 org.apache.hadoop.hbase.security.SaslClientHandler.handlerAdded(SaslClientHandler.java:154)
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:486)
   ... 20 more
 Caused by: GSSException: No valid credentials provided (Mechanism level: 
 Failed to find any Kerberos tgt)
   at 
 sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
   at 
 sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
   at 
 sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
   at 
 sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
   at 
 sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
   at 
 sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
   at 
 com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
 {noformat}
 When hbase.rpc.client.impl is set to RpcClientImpl, there seems to be no issue.
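For reference, a minimal client-side sketch of that workaround (the fully qualified RpcClientImpl class name below is an assumption):
{code:title=Pinning the blocking RPC client|borderStyle=solid}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class BlockingRpcClientWorkaround {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Switch the RPC client back to the blocking implementation instead of AsyncRpcClient.
    conf.set("hbase.rpc.client.impl", "org.apache.hadoop.hbase.ipc.RpcClientImpl");
    try (Connection connection = ConnectionFactory.createConnection(conf)) {
      System.out.println("Connected using the blocking RPC client: " + connection);
    }
  }
}
{code}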



--

[jira] [Comment Edited] (HBASE-12953) RegionServer is not functionally working with AsyncRpcClient in secure mode

2015-02-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328129#comment-14328129
 ] 

stack edited comment on HBASE-12953 at 2/19/15 9:16 PM:


[~Apache9] Are the TestMasterObserver's related at all?  Let me rerun.


was (Author: stack):
[~octo47] Are the TestMasterObserver's related at all?  Let me rerun.

 RegionServer is not functionally working with AsyncRpcClient in secure mode
 ---

 Key: HBASE-12953
 URL: https://issues.apache.org/jira/browse/HBASE-12953
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0, 1.1.0
Reporter: Ashish Singhi
Assignee: zhangduo
Priority: Critical
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12953.patch, HBASE-12953_1.patch, 
 HBASE-12953_2.patch, HBASE-12953_2.patch, HBASE-12953_2.patch, 
 HBASE-12953_2.patch, HBASE-12953_2.patch, HBASE-12953_3 (2).patch, 
 HBASE-12953_3.patch, HBASE-12953_3.patch, HBASE-12953_3.patch, testcase.patch


 HBase version 2.0.0
 Default value for {{hbase.rpc.client.impl}} is set to AsyncRpcClient.
 When trying to install HBase with Kerberos, the RegionServer does not work 
 correctly. The following is logged in its log file:
 {noformat}
 2015-02-02 14:59:05,407 WARN  [AsyncRpcChannel-pool1-t1] 
 channel.DefaultChannelPipeline: An exceptionCaught() event was fired, and it 
 reached at the tail of the pipeline. It usually means the last handler in the 
 pipeline did not handle the exception.
 io.netty.channel.ChannelPipelineException: 
 org.apache.hadoop.hbase.security.SaslClientHandler.handlerAdded() has thrown 
 an exception; removed.
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:499)
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded(DefaultChannelPipeline.java:481)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst0(DefaultChannelPipeline.java:114)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:97)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:235)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:214)
   at 
 org.apache.hadoop.hbase.ipc.AsyncRpcChannel$2.operationComplete(AsyncRpcChannel.java:194)
   at 
 org.apache.hadoop.hbase.ipc.AsyncRpcChannel$2.operationComplete(AsyncRpcChannel.java:157)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
   at 
 io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
   at 
 io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
   at 
 io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:253)
   at 
 io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:288)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by 
 GSSException: No valid credentials provided (Mechanism level: Failed to find 
 any Kerberos tgt)]
   at 
 com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
   at 
 org.apache.hadoop.hbase.security.SaslClientHandler.handlerAdded(SaslClientHandler.java:154)
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:486)
   ... 20 more
 Caused by: GSSException: No valid credentials provided (Mechanism level: 
 Failed to find any Kerberos tgt)
   at 
 sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
   at 
 sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
   at 
 sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
   at 
 sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
   at 
 sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
   at 
 

[jira] [Commented] (HBASE-13054) Provide more tracing information for locking/latching events.

2015-02-19 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328139#comment-14328139
 ] 

Elliott Clark commented on HBASE-13054:
---

{code}Trace.startSpan("MemStoreScanner").close();{code}

This should be an annotation.

{code}traceScope.getSpan().addTimelineAnnotation("Waiting for row lock");{code}

No need to add that annotation since that's the purpose of the span.

{code}if (traceScope != null) 
traceScope.getSpan().addTimelineAnnotation("Acquired row lock");{code}
I'd rather just annotate the failure case, since that will be the more unusual 
case.
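A small sketch of annotating only the failure path (HTrace 3.x style; the span name and timeout handling below are assumptions, not the patch itself):
{code:title=Annotate-only-on-failure sketch|borderStyle=solid}
import java.io.IOException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

import org.apache.htrace.Trace;
import org.apache.htrace.TraceScope;

public class RowLockTracing {
  private final ReentrantLock rowLock = new ReentrantLock();

  public void lockRowTraced(long timeoutMs) throws IOException {
    TraceScope traceScope = Trace.startSpan("HRegion.getRowLock");
    try {
      boolean acquired;
      try {
        acquired = rowLock.tryLock(timeoutMs, TimeUnit.MILLISECONDS);
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        acquired = false;
      }
      if (!acquired) {
        // Only the unusual case gets a timeline annotation; the span's duration
        // already covers the normal waiting time.
        if (traceScope.getSpan() != null) {
          traceScope.getSpan().addTimelineAnnotation(
              "Failed to acquire row lock in " + timeoutMs + "ms");
        }
        throw new IOException("Timed out waiting for row lock");
      }
    } finally {
      traceScope.close();
    }
  }
}
{code}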


 Provide more tracing information for locking/latching events.
 -

 Key: HBASE-13054
 URL: https://issues.apache.org/jira/browse/HBASE-13054
 Project: HBase
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-13054.patch, HBASE-13054_v2.patch


 Currently, not much tracing information is available for locking and latching 
 events, such as row-level locking during mini-batch mutations, region-level 
 locking during flush and close, and so on. It would be better to add trace 
 information for such events so that it is useful for finding the time spent 
 on locking and the time waiting on locks while analyzing performance issues in 
 queries using trace information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12953) RegionServer is not functionally working with AsyncRpcClient in secure mode

2015-02-19 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12953:
--
Assignee: zhangduo  (was: stack)

 RegionServer is not functionally working with AsyncRpcClient in secure mode
 ---

 Key: HBASE-12953
 URL: https://issues.apache.org/jira/browse/HBASE-12953
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0, 1.1.0
Reporter: Ashish Singhi
Assignee: zhangduo
Priority: Critical
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12953.patch, HBASE-12953_1.patch, 
 HBASE-12953_2.patch, HBASE-12953_2.patch, HBASE-12953_2.patch, 
 HBASE-12953_2.patch, HBASE-12953_2.patch, HBASE-12953_3 (2).patch, 
 HBASE-12953_3.patch, HBASE-12953_3.patch, HBASE-12953_3.patch, testcase.patch


 HBase version 2.0.0
 Default value for {{hbase.rpc.client.impl}} is set to AsyncRpcClient.
 When trying to install HBase with Kerberos, the RegionServer does not work 
 correctly. The following is logged in its log file:
 {noformat}
 2015-02-02 14:59:05,407 WARN  [AsyncRpcChannel-pool1-t1] 
 channel.DefaultChannelPipeline: An exceptionCaught() event was fired, and it 
 reached at the tail of the pipeline. It usually means the last handler in the 
 pipeline did not handle the exception.
 io.netty.channel.ChannelPipelineException: 
 org.apache.hadoop.hbase.security.SaslClientHandler.handlerAdded() has thrown 
 an exception; removed.
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:499)
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded(DefaultChannelPipeline.java:481)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst0(DefaultChannelPipeline.java:114)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:97)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:235)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:214)
   at 
 org.apache.hadoop.hbase.ipc.AsyncRpcChannel$2.operationComplete(AsyncRpcChannel.java:194)
   at 
 org.apache.hadoop.hbase.ipc.AsyncRpcChannel$2.operationComplete(AsyncRpcChannel.java:157)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
   at 
 io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
   at 
 io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
   at 
 io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:253)
   at 
 io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:288)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by 
 GSSException: No valid credentials provided (Mechanism level: Failed to find 
 any Kerberos tgt)]
   at 
 com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
   at 
 org.apache.hadoop.hbase.security.SaslClientHandler.handlerAdded(SaslClientHandler.java:154)
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:486)
   ... 20 more
 Caused by: GSSException: No valid credentials provided (Mechanism level: 
 Failed to find any Kerberos tgt)
   at 
 sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
   at 
 sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
   at 
 sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
   at 
 sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
   at 
 sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
   at 
 sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
   at 
 com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
 {noformat}
 When hbase.rpc.client.impl is set to RpcClientImpl, there seems to be no issue.



--
This 

[jira] [Commented] (HBASE-13056) Refactor table.jsp code to remove repeated code and make it easier to add new checks

2015-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327189#comment-14327189
 ] 

Hadoop QA commented on HBASE-13056:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12699640/HBASE-13056-0.98.patch
  against 0.98 branch at commit 31f17b17f0e2d12550b97098ec45ab59c5d98d58.
  ATTACHMENT ID: 12699640

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12908//console

This message is automatically generated.

 Refactor table.jsp code to remove repeated code and make it easier to add new 
 checks
 

 Key: HBASE-13056
 URL: https://issues.apache.org/jira/browse/HBASE-13056
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Vikas Vishwakarma
Assignee: Vikas Vishwakarma
 Fix For: 2.0.0

 Attachments: HBASE-13056-0.98.patch, HBASE-13056.patch


 While trying to fix HBASE-13001, I realized that there is a lot of HTML code 
 repetition in table.jsp, which makes adding new checks slightly difficult, in 
 the sense that I would have to:
 1. Add the check at multiple places in the code
 Or 
 2. Repeat the HTML code again for the new check 
 So I am proposing to re-factor the table.jsp code such that the common HTML 
 header/body is loaded without any condition check, and then we generate the 
 condition-specific HTML code. 
 snapshot.jsp follows the same format, as explained below:
 {noformat}
 Current implementation:
 
 if( x ) {
   title_x
   common_html_header
   common_html_body
   x_specific_html_body
 } else {
   title_y
   common_html_header
   common_html_body
   y_specific_html_body
 }
 New Implementation:
 ==
 if( x ) {
   title_x
 } else {
   title_y
 }
 common_html_header
 common_html_body
 if( x ) {
   x_specific_html_body
 } else {
   y_specific_html_body
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13069) Thrift Http Server returning an error code of 500 instead of 401 when authentication fails.

2015-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327209#comment-14327209
 ] 

Hadoop QA commented on HBASE-13069:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699624/HBASE-13069.patch
  against master branch at commit 31f17b17f0e2d12550b97098ec45ab59c5d98d58.
  ATTACHMENT ID: 12699624

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.
{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12907//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12907//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12907//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12907//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12907//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12907//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12907//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12907//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12907//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12907//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12907//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12907//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12907//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12907//console

This message is automatically generated.

 Thrift Http Server returning an error code of 500 instead of 401 when 
 authentication fails.
 ---

 Key: HBASE-13069
 URL: https://issues.apache.org/jira/browse/HBASE-13069
 Project: HBase
  Issue Type: Bug
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Attachments: HBASE-13069.patch, HBASE-13069.patch


 As per description.
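For readers following along, the intent of the fix (sketched here against the
plain servlet API, not the actual ThriftHttpServlet code) is roughly the
following:
{code}
import java.io.IOException;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch: when SPNEGO/Kerberos authentication of the request
// fails, answer 401 (Unauthorized) with a WWW-Authenticate challenge instead
// of letting the failure surface as a 500 Internal Server Error.
final class AuthFailureResponder {
  static void reject(HttpServletResponse response) throws IOException {
    response.setHeader("WWW-Authenticate", "Negotiate");
    response.sendError(HttpServletResponse.SC_UNAUTHORIZED,
        "Authentication failed");
  }
}
{code}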



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13056) Refactor table.jsp code to remove repeated code and make it easier to add new checks

2015-02-19 Thread Vikas Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Vishwakarma updated HBASE-13056:
--
Attachment: (was: HBASE-13056-0.98.patch)

 Refactor table.jsp code to remove repeated code and make it easier to add new 
 checks
 

 Key: HBASE-13056
 URL: https://issues.apache.org/jira/browse/HBASE-13056
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Vikas Vishwakarma
Assignee: Vikas Vishwakarma
 Fix For: 2.0.0

 Attachments: HBASE-13056.patch


 While trying to fix HBASE-13001, I realized that there is a lot of HTML code 
 repetition in table.jsp, which makes adding new checks slightly difficult: I 
 would have to either
 1. Add the check at multiple places in the code, or 
 2. Repeat the HTML code again for the new check. 
 So I am proposing to refactor the table.jsp code so that the common HTML 
 header/body is emitted without any condition check, and only the 
 condition-specific HTML code is generated inside the checks. 
 snapshot.jsp already follows the same format, as explained below:
 {noformat}
 Current implementation:
 
 if( x ) {
   title_x
   common_html_header
   common_html_body
   x_specific_html_body
 } else {
   title_y
   common_html_header
   common_html_body
   y_specific_html_body
 }
 New Implementation:
 ==
 if( x ) {
   title_x
 } else {
   title_y
 }
 common_html_header
 common_html_body
 if( x ) {
   x_specific_html_body
 } else {
   y_specific_html_body
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13056) Refactor table.jsp code to remove repeated code and make it easier to add new checks

2015-02-19 Thread Vikas Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Vishwakarma updated HBASE-13056:
--
Attachment: HBASE-13056-0.98.patch

 Refactor table.jsp code to remove repeated code and make it easier to add new 
 checks
 

 Key: HBASE-13056
 URL: https://issues.apache.org/jira/browse/HBASE-13056
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Vikas Vishwakarma
Assignee: Vikas Vishwakarma
 Fix For: 2.0.0

 Attachments: HBASE-13056-0.98.patch, HBASE-13056.patch


 While trying to fix HBASE-13001, I realized that there is a lot of HTML code 
 repetition in table.jsp, which makes adding new checks slightly difficult: I 
 would have to either
 1. Add the check at multiple places in the code, or 
 2. Repeat the HTML code again for the new check. 
 So I am proposing to refactor the table.jsp code so that the common HTML 
 header/body is emitted without any condition check, and only the 
 condition-specific HTML code is generated inside the checks. 
 snapshot.jsp already follows the same format, as explained below:
 {noformat}
 Current implementation:
 
 if( x ) {
   title_x
   common_html_header
   common_html_body
   x_specific_html_body
 } else {
   title_y
   common_html_header
   common_html_body
   y_specific_html_body
 }
 New Implementation:
 ==
 if( x ) {
   title_x
 } else {
   title_y
 }
 common_html_header
 common_html_body
 if( x ) {
   x_specific_html_body
 } else {
   y_specific_html_body
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13056) Refactor table.jsp code to remove repeated code and make it easier to add new checks

2015-02-19 Thread Vikas Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Vishwakarma updated HBASE-13056:
--
Attachment: HBASE-13056-0.98.patch

 Refactor table.jsp code to remove repeated code and make it easier to add new 
 checks
 

 Key: HBASE-13056
 URL: https://issues.apache.org/jira/browse/HBASE-13056
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Vikas Vishwakarma
Assignee: Vikas Vishwakarma
 Fix For: 2.0.0

 Attachments: HBASE-13056-0.98.patch, HBASE-13056.patch


 While trying to fix HBASE-13001, I realized that there is a lot of HTML code 
 repetition in table.jsp, which makes adding new checks slightly difficult: I 
 would have to either
 1. Add the check at multiple places in the code, or 
 2. Repeat the HTML code again for the new check. 
 So I am proposing to refactor the table.jsp code so that the common HTML 
 header/body is emitted without any condition check, and only the 
 condition-specific HTML code is generated inside the checks. 
 snapshot.jsp already follows the same format, as explained below:
 {noformat}
 Current implementation:
 
 if( x ) {
   title_x
   common_html_header
   common_html_body
   x_specific_html_body
 } else {
   title_y
   common_html_header
   common_html_body
   y_specific_html_body
 }
 New Implementation:
 ==
 if( x ) {
   title_x
 } else {
   title_y
 }
 common_html_header
 common_html_body
 if( x ) {
   x_specific_html_body
 } else {
   y_specific_html_body
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13056) Refactor table.jsp code to remove repeated code and make it easier to add new checks

2015-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327266#comment-14327266
 ] 

Hadoop QA commented on HBASE-13056:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12699646/HBASE-13056-0.98.patch
  against 0.98 branch at commit 31f17b17f0e2d12550b97098ec45ab59c5d98d58.
  ATTACHMENT ID: 12699646

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12909//console

This message is automatically generated.

 Refactor table.jsp code to remove repeated code and make it easier to add new 
 checks
 

 Key: HBASE-13056
 URL: https://issues.apache.org/jira/browse/HBASE-13056
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Vikas Vishwakarma
Assignee: Vikas Vishwakarma
 Fix For: 2.0.0

 Attachments: HBASE-13056-0.98.patch, HBASE-13056.patch


 While trying to fix HBASE-13001, I realized that there is a lot of HTML code 
 repetition in table.jsp, which makes adding new checks slightly difficult: I 
 would have to either
 1. Add the check at multiple places in the code, or 
 2. Repeat the HTML code again for the new check. 
 So I am proposing to refactor the table.jsp code so that the common HTML 
 header/body is emitted without any condition check, and only the 
 condition-specific HTML code is generated inside the checks. 
 snapshot.jsp already follows the same format, as explained below:
 {noformat}
 Current implementation:
 
 if( x ) {
   title_x
   common_html_header
   common_html_body
   x_specific_html_body
 } else {
   title_y
   common_html_header
   common_html_body
   y_specific_html_body
 }
 New Implementation:
 ==
 if( x ) {
   title_x
 } else {
   title_y
 }
 common_html_header
 common_html_body
 if( x ) {
   x_specific_html_body
 } else {
   y_specific_html_body
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13056) Refactor table.jsp code to remove repeated code and make it easier to add new checks

2015-02-19 Thread Vikas Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Vishwakarma updated HBASE-13056:
--
Attachment: (was: HBASE-13056-0.98.patch)

 Refactor table.jsp code to remove repeated code and make it easier to add new 
 checks
 

 Key: HBASE-13056
 URL: https://issues.apache.org/jira/browse/HBASE-13056
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Vikas Vishwakarma
Assignee: Vikas Vishwakarma
 Fix For: 2.0.0

 Attachments: HBASE-13056.patch


 While trying to fix HBASE-13001, I realized that there is a lot of HTML code 
 repetition in table.jsp, which makes adding new checks slightly difficult: I 
 would have to either
 1. Add the check at multiple places in the code, or 
 2. Repeat the HTML code again for the new check. 
 So I am proposing to refactor the table.jsp code so that the common HTML 
 header/body is emitted without any condition check, and only the 
 condition-specific HTML code is generated inside the checks. 
 snapshot.jsp already follows the same format, as explained below:
 {noformat}
 Current implementation:
 
 if( x ) {
   title_x
   common_html_header
   common_html_body
   x_specific_html_body
 } else {
   title_y
   common_html_header
   common_html_body
   y_specific_html_body
 }
 New Implementation:
 ==
 if( x ) {
   title_x
 } else {
   title_y
 }
 common_html_header
 common_html_body
 if( x ) {
   x_specific_html_body
 } else {
   y_specific_html_body
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13072) BucketCache.evictBlock returns true if block not exists

2015-02-19 Thread zhangduo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangduo updated HBASE-13072:
-
Summary: BucketCache.evictBlock returns true if block not exists  (was: 
BucketCache.evictBlock always returns true)

 BucketCache.evictBlock returns true if block not exists
 ---

 Key: HBASE-13072
 URL: https://issues.apache.org/jira/browse/HBASE-13072
 Project: HBase
  Issue Type: Bug
  Components: BlockCache
Affects Versions: 1.0.0, 2.0.0, 0.98.10, 1.1.0
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11


 The comment of BlockCache.evictBlock says 'true if block existed and was 
 evicted, false if not' but BucketCache does not follow it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13058) Hbase shell command 'scan' for non existent table shows unnecessary info for one unrelated existent table.

2015-02-19 Thread Abhishek Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327410#comment-14327410
 ] 

Abhishek Kumar commented on HBASE-13058:


Thanks, Andrew, for your input/comments :). I am thinking of modifying the shell 
message in the commands.rb file as follows:

 Hbase shell command 'scan' for non existent table shows unnecessary info for 
 one unrelated existent table.
 --

 Key: HBASE-13058
 URL: https://issues.apache.org/jira/browse/HBASE-13058
 Project: HBase
  Issue Type: Bug
  Components: Client
Reporter: Abhishek Kumar
Priority: Trivial
 Attachments: 0001-HBASE-13058-Error-messages-in-scan-table.patch


 When scanning for a non-existent table in the hbase shell, the error message 
 sometimes (based on META table content) displays info about a completely 
 unrelated table, which seems unnecessary and inconsistent with other error 
 messages:
 {noformat}
 hbase(main):016:0> scan 'noTable'
 ROW  COLUMN+CELL
 ERROR: Unknown table Table 'noTable' was not found, got: hbase:namespace.!
 -
 hbase(main):017:0> scan '01_noTable'
 ROW  COLUMN+CELL
 ERROR: Unknown table 01_noTable!
 --
 {noformat}
 It happens when doing a META table scan (to locate the input table) and the 
 scanner stops at the row of another table (beyond which the table cannot 
 exist) in ConnectionManager.locateRegionInMeta:
 {noformat}
 private RegionLocations locateRegionInMeta(TableName tableName, byte[] row,
    boolean useCache, boolean retry, int replicaId) throws IOException {
 .
 
 // possible we got a region of a different table...
   if (!regionInfo.getTable().equals(tableName)) {
 throw new TableNotFoundException(
   "Table '" + tableName + "' was not found, got: " +
   regionInfo.getTable() + ".");
   }
 ...
 ...
 {noformat}
 Here, we can simply add a debug message (if required) and just throw 
 TableNotFoundException(tableName) with only the table name instead of the 
 row the scanner was positioned at.
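A minimal sketch of that proposal, for illustration only (the LOG reference and
the wording of the debug message are assumptions, not the attached patch):
{code}
// possible we got a region of a different table...
if (!regionInfo.getTable().equals(tableName)) {
  if (LOG.isDebugEnabled()) {  // assumes a logger on the enclosing class
    LOG.debug("Meta scan for " + tableName + " stopped at a row of table "
        + regionInfo.getTable());
  }
  // Report only the table the caller asked for, not the unrelated table
  // whose row the meta scanner happened to stop at.
  throw new TableNotFoundException(tableName.getNameAsString());
}
{code}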



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13056) Refactor table.jsp code to remove repeated code and make it easier to add new checks

2015-02-19 Thread Vikas Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Vishwakarma updated HBASE-13056:
--
Attachment: HBASE-13056-0.98.patch

 Refactor table.jsp code to remove repeated code and make it easier to add new 
 checks
 

 Key: HBASE-13056
 URL: https://issues.apache.org/jira/browse/HBASE-13056
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Vikas Vishwakarma
Assignee: Vikas Vishwakarma
 Fix For: 2.0.0

 Attachments: HBASE-13056-0.98.patch, HBASE-13056.patch


 While trying to fix HBASE-13001, I realized that there is a lot of HTML code 
 repetition in table.jsp, which makes adding new checks slightly difficult: I 
 would have to either
 1. Add the check at multiple places in the code, or 
 2. Repeat the HTML code again for the new check. 
 So I am proposing to refactor the table.jsp code so that the common HTML 
 header/body is emitted without any condition check, and only the 
 condition-specific HTML code is generated inside the checks. 
 snapshot.jsp already follows the same format, as explained below:
 {noformat}
 Current implementation:
 
 if( x ) {
   title_x
   common_html_header
   common_html_body
   x_specific_html_body
 } else {
   title_y
   common_html_header
   common_html_body
   y_specific_html_body
 }
 New Implementation:
 ==
 if( x ) {
   title_x
 } else {
   title_y
 }
 common_html_header
 common_html_body
 if( x ) {
   x_specific_html_body
 } else {
   y_specific_html_body
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13056) Refactor table.jsp code to remove repeated code and make it easier to add new checks

2015-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327338#comment-14327338
 ] 

Hadoop QA commented on HBASE-13056:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12699653/HBASE-13056-0.98.patch
  against 0.98 branch at commit 31f17b17f0e2d12550b97098ec45ab59c5d98d58.
  ATTACHMENT ID: 12699653

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12910//console

This message is automatically generated.

 Refactor table.jsp code to remove repeated code and make it easier to add new 
 checks
 

 Key: HBASE-13056
 URL: https://issues.apache.org/jira/browse/HBASE-13056
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Vikas Vishwakarma
Assignee: Vikas Vishwakarma
 Fix For: 2.0.0

 Attachments: HBASE-13056-0.98.patch, HBASE-13056.patch


 While trying to fix HBASE-13001, I realized that there is lot of html code 
 repetition in table.jsp which is making addition of new checks slightly 
 difficult in the sense I will have to:
 1. Add the check at multiple places in the code
 Or 
 2. Repeat the html code again for the new check 
 So I am proposing to re-factor table.jsp code such that the common html 
 header/body is loaded without any condition check and then we generate the 
 condition specific html code 
 snapshot.jsp follows the same format as explained below:
 {noformat}
 Current implementation:
 
 if( x ) {
   title_x
   common_html_header
   common_html_body
   x_specific_html_body
 } else {
   title_y
   common_html_header
   common_html_body
   y_specific_html_body
 }
 New Implementation:
 ==
 if( x ) {
   title_x
 } else {
   title_y
 }
 common_html_header
 common_html_body
 if( x ) {
   x_specific_html_body
 } else {
   y_specific_html_body
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13056) Refactor table.jsp code to remove repeated code and make it easier to add new checks

2015-02-19 Thread Vikas Vishwakarma (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327349#comment-14327349
 ] 

Vikas Vishwakarma commented on HBASE-13056:
---

This is strange, [~stack]. Yesterday the patch worked on 0.98 in the pre-commit 
run above. Today I tried multiple times, doing a fresh checkout and merging the 
changes with the latest 0.98 branch, but the patch keeps failing to apply. Any 
idea what else I could try to resolve this?

 Refactor table.jsp code to remove repeated code and make it easier to add new 
 checks
 

 Key: HBASE-13056
 URL: https://issues.apache.org/jira/browse/HBASE-13056
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Vikas Vishwakarma
Assignee: Vikas Vishwakarma
 Fix For: 2.0.0

 Attachments: HBASE-13056-0.98.patch, HBASE-13056.patch


 While trying to fix HBASE-13001, I realized that there is a lot of HTML code 
 repetition in table.jsp, which makes adding new checks slightly difficult: I 
 would have to either
 1. Add the check at multiple places in the code, or 
 2. Repeat the HTML code again for the new check. 
 So I am proposing to refactor the table.jsp code so that the common HTML 
 header/body is emitted without any condition check, and only the 
 condition-specific HTML code is generated inside the checks. 
 snapshot.jsp already follows the same format, as explained below:
 {noformat}
 Current implementation:
 
 if( x ) {
   title_x
   common_html_header
   common_html_body
   x_specific_html_body
 } else {
   title_y
   common_html_header
   common_html_body
   y_specific_html_body
 }
 New Implementation:
 ==
 if( x ) {
   title_x
 } else {
   title_y
 }
 common_html_header
 common_html_body
 if( x ) {
   x_specific_html_body
 } else {
   y_specific_html_body
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13071) Hbase Streaming Scan Feature

2015-02-19 Thread Eshcar Hillel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eshcar Hillel updated HBASE-13071:
--
Attachment: HBaseStreamingScanDesign.pdf

Design Document

 Hbase Streaming Scan Feature
 

 Key: HBASE-13071
 URL: https://issues.apache.org/jira/browse/HBASE-13071
 Project: HBase
  Issue Type: New Feature
Reporter: Eshcar Hillel
 Attachments: HBaseStreamingScanDesign.pdf


 A scan operation iterates over all rows of a table or a subrange of the 
 table. The synchronous nature in which the data is served at the client side 
 hinders the speed the application traverses the data: it increases the 
 overall processing time, and may cause a great variance in the times the 
 application waits for the next piece of data.
 The scanner next() method at the client side invokes an RPC to the 
 regionserver and then stores the results in a cache. The application can 
 specify how many rows will be transmitted per RPC; by default this is set to 
 100 rows. 
 The cache can be considered as a producer-consumer queue, where the hbase 
 client pushes the data to the queue and the application consumes it. 
 Currently this queue is synchronous, i.e., blocking. More specifically, when 
 the application consumed all the data from the cache --- so the cache is 
 empty --- the hbase client retrieves additional data from the server and 
 re-fills the cache with new data. During this time the application is blocked.
 Under the assumption that the application processing time can be balanced by 
 the time it takes to retrieve the data, an asynchronous approach can reduce 
 the time the application is waiting for data.
 We attach a design document.
 We also have a patch that is based on a private branch, and some evaluation 
 results of this code.
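As a rough illustration of the asynchronous approach described above (this is
not the attached design or patch; all class and method names in the sketch are
hypothetical), a client-side wrapper could prefetch the next batch on a
background thread while the application consumes the current one:
{code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;

/** Hypothetical sketch: prefetch scan batches so next() rarely blocks. */
class PrefetchingScanner {
  private final BlockingQueue<Result[]> queue = new ArrayBlockingQueue<>(2);
  private final ExecutorService pool = Executors.newSingleThreadExecutor();

  PrefetchingScanner(final ResultScanner scanner, final int batchSize) {
    pool.submit(new Runnable() {
      @Override
      public void run() {
        try {
          Result[] batch;
          do {
            batch = scanner.next(batchSize);  // RPC runs off the app thread
            queue.put(batch);
          } while (batch.length > 0);
        } catch (Exception e) {
          // a real implementation would propagate this to the consumer
        } finally {
          pool.shutdown();
        }
      }
    });
  }

  /** Blocks only when the producer has not kept up with the application. */
  Result[] nextBatch() throws InterruptedException {
    return queue.take();
  }
}
{code}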



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13072) BucketCache.evictBlock returns true if block not exists

2015-02-19 Thread zhangduo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangduo updated HBASE-13072:
-
Status: Patch Available  (was: Open)

 BucketCache.evictBlock returns true if block not exists
 ---

 Key: HBASE-13072
 URL: https://issues.apache.org/jira/browse/HBASE-13072
 Project: HBase
  Issue Type: Bug
  Components: BlockCache
Affects Versions: 0.98.10, 1.0.0, 2.0.0, 1.1.0
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-13072.patch


 The comment of BlockCache.evictBlock says 'true if block existed and was 
 evicted, false if not' but BucketCache does not follow it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13058) Hbase shell command 'scan' for non existent table shows unnecessary info for one unrelated existent table.

2015-02-19 Thread Abhishek Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327419#comment-14327419
 ] 

Abhishek Kumar commented on HBASE-13058:


Please let me know if the above changes seem OK, or whether we should try 
handling this particular exception in the individual command files like scan.rb.

 Hbase shell command 'scan' for non existent table shows unnecessary info for 
 one unrelated existent table.
 --

 Key: HBASE-13058
 URL: https://issues.apache.org/jira/browse/HBASE-13058
 Project: HBase
  Issue Type: Bug
  Components: Client
Reporter: Abhishek Kumar
Priority: Trivial
 Attachments: 0001-HBASE-13058-Error-messages-in-scan-table.patch


 When scanning for a non-existent table in the hbase shell, the error message 
 sometimes (based on META table content) displays info about a completely 
 unrelated table, which seems unnecessary and inconsistent with other error 
 messages:
 {noformat}
 hbase(main):016:0> scan 'noTable'
 ROW  COLUMN+CELL
 ERROR: Unknown table Table 'noTable' was not found, got: hbase:namespace.!
 -
 hbase(main):017:0> scan '01_noTable'
 ROW  COLUMN+CELL
 ERROR: Unknown table 01_noTable!
 --
 {noformat}
 It happens when doing a META table scan (to locate the input table) and the 
 scanner stops at the row of another table (beyond which the table cannot 
 exist) in ConnectionManager.locateRegionInMeta:
 {noformat}
 private RegionLocations locateRegionInMeta(TableName tableName, byte[] row,
    boolean useCache, boolean retry, int replicaId) throws IOException {
 .
 
 // possible we got a region of a different table...
   if (!regionInfo.getTable().equals(tableName)) {
 throw new TableNotFoundException(
   "Table '" + tableName + "' was not found, got: " +
   regionInfo.getTable() + ".");
   }
 ...
 ...
 {noformat}
 Here, we can simply add a debug message (if required) and just throw 
 TableNotFoundException(tableName) with only the table name instead of the 
 row the scanner was positioned at.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13071) Hbase Streaming Scan Feature

2015-02-19 Thread Eshcar Hillel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eshcar Hillel updated HBASE-13071:
--
Description: 
A scan operation iterates over all rows of a table or a subrange of the table. 
The synchronous nature in which the data is served at the client side hinders 
the speed the application traverses the data: it increases the overall 
processing time, and may cause a great variance in the times the application 
waits for the next piece of data.

The scanner next() method at the client side invokes an RPC to the regionserver 
and then stores the results in a cache. The application can specify how many 
rows will be transmitted per RPC; by default this is set to 100 rows. 
The cache can be considered as a producer-consumer queue, where the hbase 
client pushes the data to the queue and the application consumes it. Currently 
this queue is synchronous, i.e., blocking. More specifically, when the 
application consumed all the data from the cache ---so the cache is empty --- 
the hbase client retrieves additional data from the server and re-fills the 
cache with new data. During this time the application is blocked.

Under the assumption that the application processing time can be balanced by 
the time it takes to retrieve the data, an asynchronous approach can reduce the 
time the application is waiting for data.

We attach a design document.
We also have a patch that is based on a private branch, and some evaluation 
results of this code.


  was:
A scan operation iterates over all rows of a table or a subrange of the table. 
The synchronous nature in which the data is served at the client side hinders 
the speed the application traverses the data: it increases the overall 
processing time, and may cause a great variance in the times the application 
waits for the next piece of data.

The scanner next() method at the client side invokes an RPC to the regionserver 
and then stores the results in a cache. The application can specify how many 
rows will be transmitted per RPC; by default this is set to 100 rows. 
The cache can be considered as a producer-consumer queue, where the hbase 
client pushes the data to the queue and the application consumes it. Currently 
this queue is synchronous, i.e., blocking. More specifically, when the 
application consumed all the data from the cache---so the cache is empty---the 
hbase client retrieves additional data from the server and re-fills the cache 
with new data. During this time the application is blocked.

Under the assumption that the application processing time can be balanced by 
the time it takes to retrieve the data, an asynchronous approach can reduce the 
time the application is waiting for data.

We attach a design document.
We also have a patch that is based on a private branch, and some evaluation 
results of this code.



 Hbase Streaming Scan Feature
 

 Key: HBASE-13071
 URL: https://issues.apache.org/jira/browse/HBASE-13071
 Project: HBase
  Issue Type: New Feature
Reporter: Eshcar Hillel

 A scan operation iterates over all rows of a table or a subrange of the 
 table. The synchronous nature in which the data is served at the client side 
 hinders the speed the application traverses the data: it increases the 
 overall processing time, and may cause a great variance in the times the 
 application waits for the next piece of data.
 The scanner next() method at the client side invokes an RPC to the 
 regionserver and then stores the results in a cache. The application can 
 specify how many rows will be transmitted per RPC; by default this is set to 
 100 rows. 
 The cache can be considered as a producer-consumer queue, where the hbase 
 client pushes the data to the queue and the application consumes it. 
 Currently this queue is synchronous, i.e., blocking. More specifically, when 
 the application consumed all the data from the cache ---so the cache is empty 
 --- the hbase client retrieves additional data from the server and re-fills 
 the cache with new data. During this time the application is blocked.
 Under the assumption that the application processing time can be balanced by 
 the time it takes to retrieve the data, an asynchronous approach can reduce 
 the time the application is waiting for data.
 We attach a design document.
 We also have a patch that is based on a private branch, and some evaluation 
 results of this code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13072) BucketCache.evictBlock always returns true

2015-02-19 Thread zhangduo (JIRA)
zhangduo created HBASE-13072:


 Summary: BucketCache.evictBlock always returns true
 Key: HBASE-13072
 URL: https://issues.apache.org/jira/browse/HBASE-13072
 Project: HBase
  Issue Type: Bug
  Components: BlockCache
Affects Versions: 0.98.10, 1.0.0, 2.0.0, 1.1.0
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11


The comment of BlockCache.evictBlock says 'true if block existed and was 
evicted, false if not' but BucketCache does not follow it.
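For illustration, a minimal sketch of the behaviour the javadoc asks for (the
field and helper names below are assumptions, not the actual BucketCache
internals or the attached patch):
{code}
// Sketch only: return false when the block is not present at all, and true
// only when an existing block was actually removed from the cache.
public boolean evictBlock(BlockCacheKey cacheKey) {
  BucketEntry entry = backingMap.remove(cacheKey);  // assumed backing map
  if (entry == null) {
    return false;  // block did not exist, so nothing was evicted
  }
  blockEvicted(cacheKey, entry);                    // assumed accounting hook
  return true;
}
{code}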



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13058) Hbase shell command 'scan' for non existent table shows unnecessary info for one unrelated existent table.

2015-02-19 Thread Abhishek Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327413#comment-14327413
 ] 

Abhishek Kumar commented on HBASE-13058:


.

if cause.kind_of?(org.apache.hadoop.hbase.TableNotFoundException) then
  # commented below line and using first argument
  # str = java.lang.String.new("#{cause}")
  first_arg = args.first
  raise "Unknown table #{first_arg}!"
end



 Hbase shell command 'scan' for non existent table shows unnecessary info for 
 one unrelated existent table.
 --

 Key: HBASE-13058
 URL: https://issues.apache.org/jira/browse/HBASE-13058
 Project: HBase
  Issue Type: Bug
  Components: Client
Reporter: Abhishek Kumar
Priority: Trivial
 Attachments: 0001-HBASE-13058-Error-messages-in-scan-table.patch


 When scanning for a non-existent table in the hbase shell, the error message 
 sometimes (based on META table content) displays info about a completely 
 unrelated table, which seems unnecessary and inconsistent with other error 
 messages:
 {noformat}
 hbase(main):016:0> scan 'noTable'
 ROW  COLUMN+CELL
 ERROR: Unknown table Table 'noTable' was not found, got: hbase:namespace.!
 -
 hbase(main):017:0> scan '01_noTable'
 ROW  COLUMN+CELL
 ERROR: Unknown table 01_noTable!
 --
 {noformat}
 It happens when doing a META table scan (to locate the input table) and the 
 scanner stops at the row of another table (beyond which the table cannot 
 exist) in ConnectionManager.locateRegionInMeta:
 {noformat}
 private RegionLocations locateRegionInMeta(TableName tableName, byte[] row,
    boolean useCache, boolean retry, int replicaId) throws IOException {
 .
 
 // possible we got a region of a different table...
   if (!regionInfo.getTable().equals(tableName)) {
 throw new TableNotFoundException(
   "Table '" + tableName + "' was not found, got: " +
   regionInfo.getTable() + ".");
   }
 ...
 ...
 {noformat}
 Here, we can simply add a debug message (if required) and just throw 
 TableNotFoundException(tableName) with only the table name instead of the 
 row the scanner was positioned at.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-9604) Add metric on short-circuit reads

2015-02-19 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327353#comment-14327353
 ] 

Jean-Marc Spaggiari commented on HBASE-9604:


Sounds like it is... HBASE-8868, HBASE-7769 and HBASE-9604 might be regrouped 
under a single one...

 Add metric on short-circuit reads
 -

 Key: HBASE-9604
 URL: https://issues.apache.org/jira/browse/HBASE-9604
 Project: HBase
  Issue Type: Task
  Components: metrics
Reporter: stack
Assignee: stack
 Fix For: 2.0.0, 1.1.0


 Got this from a Colin message this afternoon:
 There are HDFS statistics that HBase could be checking by calling 
 DFSInputStream#getReadStatistics.  This tells you how many of your reads have 
 been remote, local, short-circuit, etc.  You could file an HBase JIRA for 
 them to roll those up into the HBase stats. Seems like a good idea to me.
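For context, a rough sketch of how those statistics can be read from an HDFS
stream (assuming the stream HBase holds can be unwrapped to an
HdfsDataInputStream; the metrics plumbing that would consume the value is left
out):
{code}
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.hdfs.DFSInputStream;
import org.apache.hadoop.hdfs.client.HdfsDataInputStream;

final class ShortCircuitReadStats {
  /** Returns short-circuit bytes read so far, or -1 if not an HDFS stream. */
  static long shortCircuitBytesRead(FSDataInputStream in) {
    if (in instanceof HdfsDataInputStream) {
      DFSInputStream.ReadStatistics stats =
          ((HdfsDataInputStream) in).getReadStatistics();
      return stats.getTotalShortCircuitBytesRead();
    }
    return -1L;
  }
}
{code}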



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13071) Hbase Streaming Scan Feature

2015-02-19 Thread Eshcar Hillel (JIRA)
Eshcar Hillel created HBASE-13071:
-

 Summary: Hbase Streaming Scan Feature
 Key: HBASE-13071
 URL: https://issues.apache.org/jira/browse/HBASE-13071
 Project: HBase
  Issue Type: New Feature
Reporter: Eshcar Hillel


A scan operation iterates over all rows of a table or a subrange of the table. 
The synchronous nature in which the data is served at the client side hinders 
the speed the application traverses the data: it increases the overall 
processing time, and may cause a great variance in the times the application 
waits for the next piece of data.

The scanner next() method at the client side invokes an RPC to the regionserver 
and then stores the results in a cache. The application can specify how many 
rows will be transmitted per RPC; by default this is set to 100 rows. 
The cache can be considered as a producer-consumer queue, where the hbase 
client pushes the data to the queue and the application consumes it. Currently 
this queue is synchronous, i.e., blocking. More specifically, when the 
application consumed all the data from the cache---so the cache is empty---the 
hbase client retrieves additional data from the server and re-fills the cache 
with new data. During this time the application is blocked.

Under the assumption that the application processing time can be balanced by 
the time it takes to retrieve the data, an asynchronous approach can reduce the 
time the application is waiting for data.

We attach a design document.
We also have a patch that is based on a private branch, and some evaluation 
results of this code.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13071) Hbase Streaming Scan Feature

2015-02-19 Thread Eshcar Hillel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eshcar Hillel updated HBASE-13071:
--
Description: 
A scan operation iterates over all rows of a table or a subrange of the table. 
The synchronous nature in which the data is served at the client side hinders 
the speed the application traverses the data: it increases the overall 
processing time, and may cause a great variance in the times the application 
waits for the next piece of data.

The scanner next() method at the client side invokes an RPC to the regionserver 
and then stores the results in a cache. The application can specify how many 
rows will be transmitted per RPC; by default this is set to 100 rows. 
The cache can be considered as a producer-consumer queue, where the hbase 
client pushes the data to the queue and the application consumes it. Currently 
this queue is synchronous, i.e., blocking. More specifically, when the 
application consumed all the data from the cache --- so the cache is empty --- 
the hbase client retrieves additional data from the server and re-fills the 
cache with new data. During this time the application is blocked.

Under the assumption that the application processing time can be balanced by 
the time it takes to retrieve the data, an asynchronous approach can reduce the 
time the application is waiting for data.

We attach a design document.
We also have a patch that is based on a private branch, and some evaluation 
results of this code.


  was:
A scan operation iterates over all rows of a table or a subrange of the table. 
The synchronous nature in which the data is served at the client side hinders 
the speed the application traverses the data: it increases the overall 
processing time, and may cause a great variance in the times the application 
waits for the next piece of data.

The scanner next() method at the client side invokes an RPC to the regionserver 
and then stores the results in a cache. The application can specify how many 
rows will be transmitted per RPC; by default this is set to 100 rows. 
The cache can be considered as a producer-consumer queue, where the hbase 
client pushes the data to the queue and the application consumes it. Currently 
this queue is synchronous, i.e., blocking. More specifically, when the 
application consumed all the data from the cache ---so the cache is empty --- 
the hbase client retrieves additional data from the server and re-fills the 
cache with new data. During this time the application is blocked.

Under the assumption that the application processing time can be balanced by 
the time it takes to retrieve the data, an asynchronous approach can reduce the 
time the application is waiting for data.

We attach a design document.
We also have a patch that is based on a private branch, and some evaluation 
results of this code.



 Hbase Streaming Scan Feature
 

 Key: HBASE-13071
 URL: https://issues.apache.org/jira/browse/HBASE-13071
 Project: HBase
  Issue Type: New Feature
Reporter: Eshcar Hillel

 A scan operation iterates over all rows of a table or a subrange of the 
 table. The synchronous nature in which the data is served at the client side 
 hinders the speed the application traverses the data: it increases the 
 overall processing time, and may cause a great variance in the times the 
 application waits for the next piece of data.
 The scanner next() method at the client side invokes an RPC to the 
 regionserver and then stores the results in a cache. The application can 
 specify how many rows will be transmitted per RPC; by default this is set to 
 100 rows. 
 The cache can be considered as a producer-consumer queue, where the hbase 
 client pushes the data to the queue and the application consumes it. 
 Currently this queue is synchronous, i.e., blocking. More specifically, when 
 the application consumed all the data from the cache --- so the cache is 
 empty --- the hbase client retrieves additional data from the server and 
 re-fills the cache with new data. During this time the application is blocked.
 Under the assumption that the application processing time can be balanced by 
 the time it takes to retrieve the data, an asynchronous approach can reduce 
 the time the application is waiting for data.
 We attach a design document.
 We also have a patch that is based on a private branch, and some evaluation 
 results of this code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13072) BucketCache.evictBlock returns true if block not exists

2015-02-19 Thread zhangduo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangduo updated HBASE-13072:
-
Attachment: HBASE-13072.patch

 BucketCache.evictBlock returns true if block not exists
 ---

 Key: HBASE-13072
 URL: https://issues.apache.org/jira/browse/HBASE-13072
 Project: HBase
  Issue Type: Bug
  Components: BlockCache
Affects Versions: 1.0.0, 2.0.0, 0.98.10, 1.1.0
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-13072.patch


 The comment of BlockCache.evictBlock says 'true if block existed and was 
 evicted, false if not' but BucketCache does not follow it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13072) BucketCache.evictBlock returns true if block not exists

2015-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327631#comment-14327631
 ] 

Hadoop QA commented on HBASE-13072:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699662/HBASE-13072.patch
  against master branch at commit 31f17b17f0e2d12550b97098ec45ab59c5d98d58.
  ATTACHMENT ID: 12699662

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.
{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestClientPushback

 {color:red}-1 core zombie tests{color}.  There are 2 zombie test(s):   
at 
org.apache.hadoop.hbase.coprocessor.TestMasterObserver.testRegionTransitionOperations(TestMasterObserver.java:1604)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12911//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12911//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12911//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12911//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12911//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12911//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12911//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12911//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12911//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12911//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12911//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12911//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12911//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12911//console

This message is automatically generated.

 BucketCache.evictBlock returns true if block not exists
 ---

 Key: HBASE-13072
 URL: https://issues.apache.org/jira/browse/HBASE-13072
 Project: HBase
  Issue Type: Bug
  Components: BlockCache
Affects Versions: 1.0.0, 2.0.0, 0.98.10, 1.1.0
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-13072.patch


 The comment of BlockCache.evictBlock says 'true if block existed and was 
 evicted, false if not' but BucketCache does not follow it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-2888) Review all our metrics

2015-02-19 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-2888.
--
Resolution: Won't Fix

As per [~o...@apache.org]'s suggestion. Metrics have been revamped and edited 
radically since this was filed (metrics2), and at least some of this issue was 
addressed by that refactor. Let's open specific issues to deal with whatever 
remains.

 Review all our metrics
 --

 Key: HBASE-2888
 URL: https://issues.apache.org/jira/browse/HBASE-2888
 Project: HBase
  Issue Type: Improvement
  Components: master, metrics
Reporter: Jean-Daniel Cryans

 HBase publishes a bunch of metrics, some useful some wasteful, that should be 
 improved to deliver a better ops experience. Examples:
  - Block cache hit ratio converges at some point and stops moving
  - fsReadLatency goes down when compactions are running
  - storefileIndexSizeMB is the exact same number once a system is serving 
 production load
 We could use new metrics too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13054) Provide more tracing information for locking/latching events.

2015-02-19 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated HBASE-13054:

Attachment: HBASE-13054_v2.patch

 Thanks for the review, [~apurtell]. Here is the patch adding a Trace.isTracing 
check in HRegion.

 Provide more tracing information for locking/latching events.
 -

 Key: HBASE-13054
 URL: https://issues.apache.org/jira/browse/HBASE-13054
 Project: HBase
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 2.0.0, 1.0.1, 1.1.0

 Attachments: HBASE-13054.patch, HBASE-13054_v2.patch


 Currently not much tracing information available for locking and latching 
 events like row level locking during do mini batch mutations, region level 
 locking during flush, close and so on. It will be better to add the trace 
 information for such events so that it will be useful for finding time spent 
 on locking and waiting time on locks while analyzing performance issues in 
 queries using trace information.
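As a rough illustration of the kind of guard such a patch adds (the HTrace
package name differs between versions, and the lock and annotation text here
are hypothetical, not the attached patch):
{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;
import org.apache.htrace.Trace;

final class TracedLocking {
  private final ReentrantReadWriteLock closeLock = new ReentrantReadWriteLock();

  void runUnderCloseLock(Runnable body) {
    // Only pay for building annotation strings when a trace is active.
    if (Trace.isTracing()) {
      Trace.addTimelineAnnotation("Acquiring read lock on region close lock");
    }
    closeLock.readLock().lock();
    try {
      if (Trace.isTracing()) {
        Trace.addTimelineAnnotation("Acquired read lock on region close lock");
      }
      body.run();
    } finally {
      closeLock.readLock().unlock();
    }
  }
}
{code}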



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13072) BucketCache.evictBlock returns true if block does not exist

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327943#comment-14327943
 ] 

Hudson commented on HBASE-13072:


FAILURE: Integrated in HBase-TRUNK #6150 (See 
[https://builds.apache.org/job/HBase-TRUNK/6150/])
HBASE-13072 BucketCache.evictBlock returns true if block does not exist (Duo 
Zhang) (tedyu: rev 18402cc850b143bc6f88d90e62c42b9ef4131ca6)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java


 BucketCache.evictBlock returns true if block does not exist
 ---

 Key: HBASE-13072
 URL: https://issues.apache.org/jira/browse/HBASE-13072
 Project: HBase
  Issue Type: Bug
  Components: BlockCache
Affects Versions: 1.0.0, 2.0.0, 0.98.10, 1.1.0
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-13072.patch


 The comment of BlockCache.evictBlock says 'true if block existed and was 
 evicted, false if not' but BucketCache does not follow it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13054) Provide more tracing information for locking/latching events.

2015-02-19 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated HBASE-13054:

Fix Version/s: 0.98.11
 Hadoop Flags: Reviewed
   Status: Patch Available  (was: Open)

 Provide more tracing information for locking/latching events.
 -

 Key: HBASE-13054
 URL: https://issues.apache.org/jira/browse/HBASE-13054
 Project: HBase
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-13054.patch, HBASE-13054_v2.patch


 Currently not much tracing information available for locking and latching 
 events like row level locking during do mini batch mutations, region level 
 locking during flush, close and so on. It will be better to add the trace 
 information for such events so that it will be useful for finding time spent 
 on locking and waiting time on locks while analyzing performance issues in 
 queries using trace information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13074) Clean up old code around hbase.master.lease.thread.wakefrequency as it is not used anymore..

2015-02-19 Thread Sunil (JIRA)
Sunil created HBASE-13074:
-

 Summary: Clean up old code around 
hbase.master.lease.thread.wakefrequency as it is not used anymore..
 Key: HBASE-13074
 URL: https://issues.apache.org/jira/browse/HBASE-13074
 Project: HBase
  Issue Type: Task
  Components: wal
Reporter: Sunil
Priority: Trivial


While checking for configs to tweak, I ran into 
hbase.master.lease.thread.wakefrequency, but it has been deprecated. There are, 
however, still references to it in a few test classes, so this is just about 
cleaning them up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10900) FULL table backup and restore

2015-02-19 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328182#comment-14328182
 ] 

Jerry He commented on HBASE-10900:
--

Had an offline discussion with [~nidmhbase] and [~apurtell]. 
Will keep the JIRA open so that people from our team or others can work 
on it in the future.
Thanks, Demai, Andrew.

 FULL table backup and restore
 -

 Key: HBASE-10900
 URL: https://issues.apache.org/jira/browse/HBASE-10900
 Project: HBase
  Issue Type: Task
Reporter: Demai Ni
 Attachments: HBASE-10900-fullbackup-trunk-v1.patch, 
 HBASE-10900-trunk-v2.patch, HBASE-10900-trunk-v3.patch, 
 HBASE-10900-trunk-v4.patch


 h2. Feature Description
 This is a subtask of 
 [HBase-7912|https://issues.apache.org/jira/browse/HBASE-7912] to support FULL 
 backup/restore, and will complete the following function:
 {code:title=Backup Restore example|borderStyle=solid}
 /* backup from sourcecluster to targetcluster 
  */
 /* if no table name specified, all tables from the source cluster will be 
 backed up */
 [sourcecluster]$ hbase backup create full 
 hdfs://hostname.targetcluster.org:9000/userid/backupdir t1_dn,t2_dn,t3_dn
 /* restore on targetcluster; this is a local restore   
   */
 /* backup_1396650096738 - backup image name   
   */
 /* t1_dn, etc. are the original table names. All tables will be restored if not 
 specified */
 /* t1_dn_restore, etc. are the restored tables. If not specified, the original 
 table names will be used */
 [targetcluster]$ hbase restore /userid/backupdir backup_1396650096738 
 t1_dn,t2_dn,t3_dn t1_dn_restore,t2_dn_restore,t3_dn_restore
 /* restore from targetcluster back to the source cluster; this is a remote restore */
 [sourcecluster]$ hbase restore 
 hdfs://hostname.targetcluster.org:9000/userid/backupdir backup_1396650096738 
 t1_dn,t2_dn,t3_dn t1_dn_restore,t2_dn_restore,t3_dn_restore
 {code}
 h2. Detailed layout and framework for the next jiras
 The patch is a wrapper of the existing snapshot and exportSnapshot, and will be 
 used as the base framework for the overall solution of 
 [HBase-7912|https://issues.apache.org/jira/browse/HBASE-7912] as described 
 below:
 * *bin/hbase*  : end-user command line interface to invoke 
 BackupClient and RestoreClient
 * *BackupClient.java*  : 'main' entry for backup operations. This patch will 
 only support 'full' backup. Future jiras will support:
 ** *create* incremental backup
 ** *cancel* an ongoing backup
 ** *delete* an existing backup image
 ** *describe* the detailed information of a backup image
 ** show *history* of all successful backups 
 ** show the *status* of the latest backup request
 ** *convert* incremental backup WAL files into HFiles, either on the fly 
 during create or after create
 ** *merge* backup images
 ** *stop* backing up a table of an existing backup image
 ** *show* tables of a backup image 
 * *BackupCommands.java* : a place to keep all the command usages and options
 * *BackupManager.java*  : handle backup requests on the server side, create 
 BACKUP ZOOKEEPER nodes to keep track of backups. The timestamps kept in zookeeper 
 will be used for future incremental backup (not included in this jira). 
 Create BackupContext and DispatchRequest. 
 * *BackupHandler.java*  : in this patch, it is a wrapper of snapshot and 
 exportsnapshot. In future jiras it will:
 ** record *timestamps* info in ZK
 ** carry out *incremental* backup
 ** update backup *progress*
 ** set flags of *status*
 ** build up the *backupManifest* file (in this jira only limited info for 
 full backup; later on, timestamps and dependencies of multiple backup images are 
 also recorded here)
 ** clean up after a *failed* backup 
 ** clean up after a *cancelled* backup
 ** allow on-the-fly *convert* during incremental backup 
 * *BackupContext.java* : encapsulate backup information like backup ID, table 
 names, directory info, phase, timestamps of backup progress, size of data, 
 ancestor info, etc. 
 * *BackupCopier.java*  : the copying operation. Later on, it will support 
 progress reporting and mapper estimation, and extend DistCp for progress 
 updates to ZK during backup. 
 * *BackupExcpetion.java*: to handle exceptions from backup/restore
 * *BackupManifest.java* : encapsulate all the backup image information. The 
 manifest info will be bundled as a manifest file together with the data, so that 
 each backup image contains all the info needed for restore. 
 * *BackupStatus.java*   : encapsulate backup status at the table level during 
 backup progress
 * *BackupUtil.java* : utility methods used during the backup process
 * *RestoreClient.java*  : 'main' entry for restore operations. This patch 
 will only support restoring 'full' backups. 
 * *RestoreUtil.java*: 

[jira] [Commented] (HBASE-13075) TableInputFormatBase spuriously warning about multiple initializeTable calls

2015-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328184#comment-14328184
 ] 

Hadoop QA commented on HBASE-13075:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12699730/HBASE-13075.1.patch.txt
  against master branch at commit 18402cc850b143bc6f88d90e62c42b9ef4131ca6.
  ATTACHMENT ID: 12699730

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.
{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithDeletes

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.coprocessor.TestMasterObserver.testRegionTransitionOperations(TestMasterObserver.java:1604)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12915//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12915//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12915//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12915//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12915//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12915//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12915//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12915//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12915//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12915//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12915//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12915//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12915//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12915//console

This message is automatically generated.

 TableInputFormatBase spuriously warning about multiple initializeTable calls
 

 Key: HBASE-13075
 URL: https://issues.apache.org/jira/browse/HBASE-13075
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 1.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Minor
 Fix For: 1.0.1, 1.1.0, 2.2.0

 Attachments: HBASE-13075.1.patch.txt


 TableInputFormatBase incorrectly checks a local variable (that can't be null) 
 rather than the instance variable (which can be null) to see if it has been 
 called multiple times.
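 For illustration, a hypothetical sketch of the bug pattern described above (the field 
 and variable names are made up, not the actual TableInputFormatBase members): the 
 warning fires because a freshly assigned local reference is checked instead of the 
 instance field.
 {code}
 public class InitOnceExample {
   private Object table;  // instance field; still null on the first initializeTable() call

   public void initializeTable(Object newTable) {
     Object local = newTable;       // local reference, never null at this point
     if (local != null) {           // BUG: always true, so it warns on every call
       System.out.println("initializeTable called multiple times?");
     }
     if (this.table != null) {      // FIX: check the instance field instead
       System.out.println("initializeTable called multiple times");
     }
     this.table = newTable;
   }
 }
 {code}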



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13075) TableInputFormatBase spuriously warning about multiple initializeTable calls

2015-02-19 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328195#comment-14328195
 ] 

Sean Busbey commented on HBASE-13075:
-

test failures are unrelated AFAICT.

 TableInputFormatBase spuriously warning about multiple initializeTable calls
 

 Key: HBASE-13075
 URL: https://issues.apache.org/jira/browse/HBASE-13075
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 1.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Minor
 Fix For: 1.0.1, 1.1.0, 2.2.0

 Attachments: HBASE-13075.1.patch.txt


 TableInputFormatBase incorrectly checks a local variable (that can't be null) 
 rather than the instance variable (which can be null) to see if it has been 
 called multiple times.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13075) TableInputFormatBase spuriously warning about multiple initializeTable calls

2015-02-19 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328202#comment-14328202
 ] 

Ted Yu commented on HBASE-13075:


+1

 TableInputFormatBase spuriously warning about multiple initializeTable calls
 

 Key: HBASE-13075
 URL: https://issues.apache.org/jira/browse/HBASE-13075
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 1.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Minor
 Fix For: 1.0.1, 1.1.0, 2.2.0

 Attachments: HBASE-13075.1.patch.txt


 TableInputFormatBase incorrectly checks a local variable (that can't be null) 
 rather than the instance variable (which can be null) to see if it has been 
 called multiple times.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13077) BoundedCompletionService doesn't pass trace info to server

2015-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328559#comment-14328559
 ] 

Hadoop QA commented on HBASE-13077:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699802/HBASE-13077.patch
  against master branch at commit 03d8918142681d4c8abe40e8c8fb32307756d8a8.
  ATTACHMENT ID: 12699802

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.
{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.coprocessor.TestMasterObserver.testRegionTransitionOperations(TestMasterObserver.java:1604)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12920//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12920//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12920//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12920//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12920//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12920//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12920//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12920//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12920//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12920//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12920//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12920//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12920//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12920//console

This message is automatically generated.

 BoundedCompletionService doesn't pass trace info to server
 --

 Key: HBASE-13077
 URL: https://issues.apache.org/jira/browse/HBASE-13077
 Project: HBase
  Issue Type: Bug
  Components: hbase
Affects Versions: 1.0.0, 2.0.0, 1.1.0
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Attachments: HBASE-13077.patch


 Today [~ndimiduk] and I found that BoundedCompletionService doesn't pass htrace 
 info to the server. As a result, scans don't pass trace info to the server.
 [~enis] FYI.
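 As a rough illustration of the general fix pattern (a sketch only, not the attached 
 patch), the Callable can be wrapped before submission so the current trace context 
 travels with it; this assumes the htrace 3.x Trace.wrap() helper:
 {code}
 import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutorCompletionService;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import org.apache.htrace.Trace;

 public class TraceWrapExample {
   public static void main(String[] args) throws Exception {
     ExecutorService pool = Executors.newFixedThreadPool(2);
     ExecutorCompletionService<String> cs = new ExecutorCompletionService<>(pool);
     Callable<String> rpcCall = () -> "scan result";  // stand-in for the scanner RPC call
     cs.submit(Trace.wrap(rpcCall));                  // wrapping carries the span to the worker thread
     System.out.println(cs.take().get());
     pool.shutdown();
   }
 }
 {code}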



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13070) Fix TestCacheOnWrite

2015-02-19 Thread zhangduo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangduo updated HBASE-13070:
-
Attachment: HBASE-13070_1.patch

Turn off prefetchOnOpen. Add a log message when the BlockCache is cleared more than once.

 Fix TestCacheOnWrite
 

 Key: HBASE-13070
 URL: https://issues.apache.org/jira/browse/HBASE-13070
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: zhangduo
Assignee: zhangduo
 Attachments: HBASE-13070.patch, HBASE-13070_1.patch


 TestCacheOnWrite uses TestHFileWriterV2.randomOrderedKey to generate a random 
 byte array, then uses the first 32 bytes as the row and the remaining part as the 
 family and qualifier. But TestHFileWriterV2.randomOrderedKey may return a byte 
 array that contains only 32 bytes, in which case the family and qualifier have zero length.
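 For illustration, a small self-contained sketch of the failure mode (the key generator 
 below is a stand-in, not TestHFileWriterV2): when the generated key is exactly 32 bytes, 
 the family/qualifier slice is empty and must be guarded against.
 {code}
 import java.util.Arrays;
 import java.util.Random;

 public class KeySplitExample {
   // Stand-in for TestHFileWriterV2.randomOrderedKey: length can be as small as 32 bytes.
   static byte[] randomKey(Random rand) {
     byte[] k = new byte[32 + rand.nextInt(8)];
     rand.nextBytes(k);
     return k;
   }

   public static void main(String[] args) {
     byte[] key = randomKey(new Random());
     byte[] row = Arrays.copyOfRange(key, 0, 32);
     byte[] famQual = Arrays.copyOfRange(key, 32, key.length);
     if (famQual.length == 0) {
       famQual = new byte[] { 'f' };  // guard against the zero-length family/qualifier case
     }
     System.out.println("row=" + row.length + " bytes, famQual=" + famQual.length + " bytes");
   }
 }
 {code}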



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13070) Fix TestCacheOnWrite

2015-02-19 Thread zhangduo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangduo updated HBASE-13070:
-
Attachment: HBASE-13070_2.patch

Add log if more than 2 evictions are done.

 Fix TestCacheOnWrite
 

 Key: HBASE-13070
 URL: https://issues.apache.org/jira/browse/HBASE-13070
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: zhangduo
Assignee: zhangduo
 Attachments: HBASE-13070.patch, HBASE-13070_1.patch, 
 HBASE-13070_2.patch


 TestCacheOnWrite uses TestHFileWriterV2.randomOrderedKey to generate a random 
 byte array, then uses the first 32 bytes as the row and the remaining part as the 
 family and qualifier. But TestHFileWriterV2.randomOrderedKey may return a byte 
 array that contains only 32 bytes, in which case the family and qualifier have zero length.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13079) Add an admonition to Scans example that the results scanner should be closed

2015-02-19 Thread Misty Stanley-Jones (JIRA)
Misty Stanley-Jones created HBASE-13079:
---

 Summary: Add an admonition to Scans example that the results 
scanner should be closed
 Key: HBASE-13079
 URL: https://issues.apache.org/jira/browse/HBASE-13079
 Project: HBase
  Issue Type: Task
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones


It seems to be a frequent occurrence that developers forget to close the 
scanner. It's in a comment now but may be missed. Add an admonition.
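For reference, a minimal sketch of the pattern the admonition recommends (connection 
and table setup elided): try-with-resources guarantees the ResultScanner is closed even 
if iteration fails.
{code}
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class ScanCloseExample {
  static void scanAll(Table table) throws Exception {
    Scan scan = new Scan();
    try (ResultScanner scanner = table.getScanner(scan)) {
      for (Result result : scanner) {
        System.out.println(result);
      }
    }  // scanner.close() runs here, releasing the server-side scanner resources
  }
}
{code}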



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13071) Hbase Streaming Scan Feature

2015-02-19 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328514#comment-14328514
 ] 

Lars Hofhansl commented on HBASE-13071:
---

Let's close one of these issues.

 Hbase Streaming Scan Feature
 

 Key: HBASE-13071
 URL: https://issues.apache.org/jira/browse/HBASE-13071
 Project: HBase
  Issue Type: New Feature
Reporter: Eshcar Hillel
 Attachments: HBaseStreamingScanDesign.pdf


 A scan operation iterates over all rows of a table or a subrange of the 
 table. The synchronous nature in which the data is served at the client side 
 hinders the speed at which the application traverses the data: it increases the 
 overall processing time, and may cause a great variance in the times the 
 application waits for the next piece of data.
 The scanner next() method at the client side invokes an RPC to the 
 regionserver and then stores the results in a cache. The application can 
 specify how many rows will be transmitted per RPC; by default this is set to 
 100 rows. 
 The cache can be considered as a producer-consumer queue, where the hbase 
 client pushes the data to the queue and the application consumes it. 
 Currently this queue is synchronous, i.e., blocking. More specifically, when 
 the application consumed all the data from the cache --- so the cache is 
 empty --- the hbase client retrieves additional data from the server and 
 re-fills the cache with new data. During this time the application is blocked.
 Under the assumption that the application processing time can be balanced by 
 the time it takes to retrieve the data, an asynchronous approach can reduce 
 the time the application is waiting for data.
 We attach a design document.
 We also have a patch that is based on a private branch, and some evaluation 
 results of this code.
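 For context, a small sketch of how the per-RPC row count mentioned above is tuned on 
 the current synchronous client (standard Scan API); the streaming proposal addresses 
 what happens once this client-side cache drains:
 {code}
 import org.apache.hadoop.hbase.client.Scan;

 public class ScanCachingExample {
   public static void main(String[] args) {
     Scan scan = new Scan();
     scan.setCaching(500);  // rows fetched per RPC and buffered on the client
     scan.setBatch(100);    // optional: cap the number of columns per Result
     System.out.println("caching=" + scan.getCaching() + ", batch=" + scan.getBatch());
   }
 }
 {code}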



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13079) Add an admonition to Scans example that the results scanner should be closed

2015-02-19 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-13079:

Status: Patch Available  (was: Open)

 Add an admonition to Scans example that the results scanner should be closed
 

 Key: HBASE-13079
 URL: https://issues.apache.org/jira/browse/HBASE-13079
 Project: HBase
  Issue Type: Task
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
 Attachments: HBASE-13079.patch


 It seems to be a frequent occurrence that developers forget to close the 
 scanner. It's in a comment now but may be missed. Add an admonition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME

2015-02-19 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328516#comment-14328516
 ] 

Lars Hofhansl commented on HBASE-11544:
---

Sounds good.
I'll fix the array size issues tonight.

 [Ergonomics] hbase.client.scanner.caching is dogged and will try to return 
 batch even if it means OOME
 --

 Key: HBASE-11544
 URL: https://issues.apache.org/jira/browse/HBASE-11544
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: Jonathan Lawlor
Priority: Critical
  Labels: beginner
 Attachments: HBASE-11544-v1.patch, HBASE-11544-v2.patch


 Running some tests, I set hbase.client.scanner.caching=1000.  Dataset has 
 large cells.  I kept OOME'ing.
 Serverside, we should measure how much we've accumulated and return to the 
 client whatever we've gathered once we pass a certain size threshold 
 rather than keep accumulating till we OOME.
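 For illustration, a hedged sketch (not the HBase server code) of the size-aware 
 behaviour described: stop filling the batch once an estimated byte budget is exceeded, 
 even if fewer than the requested number of rows have been gathered.
 {code}
 import java.util.ArrayList;
 import java.util.Iterator;
 import java.util.List;

 public class SizeLimitedBatchExample {
   static List<byte[]> nextBatch(Iterator<byte[]> rows, int caching, long maxBytes) {
     List<byte[]> batch = new ArrayList<>();
     long accumulated = 0;
     while (rows.hasNext() && batch.size() < caching && accumulated < maxBytes) {
       byte[] row = rows.next();
       batch.add(row);
       accumulated += row.length;  // rough estimate; real code would count cell heap size
     }
     return batch;  // may hold fewer than 'caching' rows when the size budget is hit
   }
 }
 {code}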



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13079) Add an admonition to Scans example that the results scanner should be closed

2015-02-19 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-13079:

Attachment: HBASE-13079.patch

 Add an admonition to Scans example that the results scanner should be closed
 

 Key: HBASE-13079
 URL: https://issues.apache.org/jira/browse/HBASE-13079
 Project: HBase
  Issue Type: Task
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
 Attachments: HBASE-13079.patch


 It seems to be a frequent occurrence that developers forget to close the 
 scanner. It's in a comment now but may be missed. Add an admonition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13079) Add an admonition to Scans example that the results scanner should be closed

2015-02-19 Thread Srikanth Srungarapu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328522#comment-14328522
 ] 

Srikanth Srungarapu commented on HBASE-13079:
-

+1 (non-binding)

 Add an admonition to Scans example that the results scanner should be closed
 

 Key: HBASE-13079
 URL: https://issues.apache.org/jira/browse/HBASE-13079
 Project: HBase
  Issue Type: Task
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
 Attachments: HBASE-13079.patch


 It seems to be a frequent occurrence that developers forget to close the 
 scanner. It's in a comment now but may be missed. Add an admonition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13056) Refactor table.jsp code to remove repeated code and make it easier to add new checks

2015-02-19 Thread Vikas Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Vishwakarma updated HBASE-13056:
--
Attachment: (was: HBASE-13056-0.98.patch)

 Refactor table.jsp code to remove repeated code and make it easier to add new 
 checks
 

 Key: HBASE-13056
 URL: https://issues.apache.org/jira/browse/HBASE-13056
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Vikas Vishwakarma
Assignee: Vikas Vishwakarma
 Fix For: 2.0.0

 Attachments: HBASE-13056.patch


 While trying to fix HBASE-13001, I realized that there is a lot of html code 
 repetition in table.jsp, which makes adding new checks slightly 
 difficult, in the sense that I would have to either:
 1. Add the check at multiple places in the code
 Or 
 2. Repeat the html code again for the new check 
 So I am proposing to refactor the table.jsp code such that the common html 
 header/body is loaded without any condition check, and then we generate the 
 condition-specific html code. 
 snapshot.jsp follows the same format, as explained below:
 {noformat}
 Current implementation:
 
 if( x ) {
   title_x
   common_html_header
   common_html_body
   x_specific_html_body
 } else {
   title_y
   common_html_header
   common_html_body
   y_specific_html_body
 }
 New Implementation:
 ==
 if( x ) {
   title_x
 } else {
   title_y
 }
 common_html_header
 common_html_body
 if( x ) {
   x_specific_html_body
 } else {
   y_specific_html_body
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13056) Refactor table.jsp code to remove repeated code and make it easier to add new checks

2015-02-19 Thread Vikas Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Vishwakarma updated HBASE-13056:
--
Attachment: HBASE-13056-0.98.patch

 Refactor table.jsp code to remove repeated code and make it easier to add new 
 checks
 

 Key: HBASE-13056
 URL: https://issues.apache.org/jira/browse/HBASE-13056
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Vikas Vishwakarma
Assignee: Vikas Vishwakarma
 Fix For: 2.0.0

 Attachments: HBASE-13056-0.98.patch, HBASE-13056.patch


 While trying to fix HBASE-13001, I realized that there is a lot of html code 
 repetition in table.jsp, which makes adding new checks slightly 
 difficult, in the sense that I would have to either:
 1. Add the check at multiple places in the code
 Or 
 2. Repeat the html code again for the new check 
 So I am proposing to refactor the table.jsp code such that the common html 
 header/body is loaded without any condition check, and then we generate the 
 condition-specific html code. 
 snapshot.jsp follows the same format, as explained below:
 {noformat}
 Current implementation:
 
 if( x ) {
   title_x
   common_html_header
   common_html_body
   x_specific_html_body
 } else {
   title_y
   common_html_header
   common_html_body
   y_specific_html_body
 }
 New Implementation:
 ==
 if( x ) {
   title_x
 } else {
   title_y
 }
 common_html_header
 common_html_body
 if( x ) {
   x_specific_html_body
 } else {
   y_specific_html_body
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13056) Refactor table.jsp code to remove repeated code and make it easier to add new checks

2015-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328556#comment-14328556
 ] 

Hadoop QA commented on HBASE-13056:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12699811/HBASE-13056-0.98.patch
  against 0.98 branch at commit 03d8918142681d4c8abe40e8c8fb32307756d8a8.
  ATTACHMENT ID: 12699811

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12922//console

This message is automatically generated.

 Refactor table.jsp code to remove repeated code and make it easier to add new 
 checks
 

 Key: HBASE-13056
 URL: https://issues.apache.org/jira/browse/HBASE-13056
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Vikas Vishwakarma
Assignee: Vikas Vishwakarma
 Fix For: 2.0.0

 Attachments: HBASE-13056-0.98.patch, HBASE-13056.patch


 While trying to fix HBASE-13001, I realized that there is a lot of html code 
 repetition in table.jsp, which makes adding new checks slightly 
 difficult, in the sense that I would have to either:
 1. Add the check at multiple places in the code
 Or 
 2. Repeat the html code again for the new check 
 So I am proposing to refactor the table.jsp code such that the common html 
 header/body is loaded without any condition check, and then we generate the 
 condition-specific html code. 
 snapshot.jsp follows the same format, as explained below:
 {noformat}
 Current implementation:
 
 if( x ) {
   title_x
   common_html_header
   common_html_body
   x_specific_html_body
 } else {
   title_y
   common_html_header
   common_html_body
   y_specific_html_body
 }
 New Implementation:
 ==
 if( x ) {
   title_x
 } else {
   title_y
 }
 common_html_header
 common_html_body
 if( x ) {
   x_specific_html_body
 } else {
   y_specific_html_body
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13079) Add an admonition to Scans example that the results scanner should be closed

2015-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328586#comment-14328586
 ] 

Hadoop QA commented on HBASE-13079:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699805/HBASE-13079.patch
  against master branch at commit 03d8918142681d4c8abe40e8c8fb32307756d8a8.
  ATTACHMENT ID: 12699805

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.
{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+Operations are applied via 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html[Table]
 instances. See hbase_apis for examples.
+link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html[Scan]
 allow iteration over adjacent rows for specified attributes.
+The easiest way to specify a specific stop point for a scan is by using the 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/InclusiveStopFilter.html[InclusiveStopFilter]
 class.

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.client.TestClientScannerRPCTimeout

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12921//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12921//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12921//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12921//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12921//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12921//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12921//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12921//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12921//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12921//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12921//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12921//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12921//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12921//console

This message is automatically generated.

 Add an admonition to Scans example that the results scanner should be closed
 

 Key: HBASE-13079
 URL: https://issues.apache.org/jira/browse/HBASE-13079
 Project: HBase
  Issue Type: Task
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
 Fix For: 2.0.0

 Attachments: HBASE-13079.patch


 It seems to be a frequent occurrence that developers forget to close the 
 scanner. It's in a comment now but may be missed. Add an admonition.



--
This message was sent by 

[jira] [Updated] (HBASE-13079) Add an admonition to Scans example that the results scanner should be closed

2015-02-19 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-13079:

   Resolution: Fixed
Fix Version/s: 2.0.0
   Status: Resolved  (was: Patch Available)

Got other +1s offline, committed with a little expansion to the note.

 Add an admonition to Scans example that the results scanner should be closed
 

 Key: HBASE-13079
 URL: https://issues.apache.org/jira/browse/HBASE-13079
 Project: HBase
  Issue Type: Task
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
 Fix For: 2.0.0

 Attachments: HBASE-13079.patch


 It seems to be a frequent occurrence that developers forget to close the 
 scanner. It's in a comment now but may be missed. Add an admonition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13072) BucketCache.evictBlock returns true if block does not exist

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328528#comment-14328528
 ] 

Hudson commented on HBASE-13072:


SUCCESS: Integrated in HBase-0.98 #863 (See 
[https://builds.apache.org/job/HBase-0.98/863/])
HBASE-13072 BucketCache.evictBlock returns true if block does not exist (Duo 
Zhang) (tedyu: rev 5301968365df20891c962d5632e5205005ef9e99)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java


 BucketCache.evictBlock returns true if block does not exist
 ---

 Key: HBASE-13072
 URL: https://issues.apache.org/jira/browse/HBASE-13072
 Project: HBase
  Issue Type: Bug
  Components: BlockCache
Affects Versions: 1.0.0, 2.0.0, 0.98.10, 1.1.0
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-13072-0.98.patch, HBASE-13072.patch


 The comment of BlockCache.evictBlock says 'true if block existed and was 
 evicted, false if not' but BucketCache does not follow it.
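 For illustration only, a simplified sketch of the contract (names are made up, not the 
 actual BucketCache internals): evictBlock should report false when the key was never 
 cached.
 {code}
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;

 public class EvictContractExample {
   private final Map<String, byte[]> backingMap = new ConcurrentHashMap<>();

   public boolean evictBlock(String cacheKey) {
     // true only if the block existed and was actually removed
     return backingMap.remove(cacheKey) != null;
   }
 }
 {code}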



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13056) Refactor table.jsp code to remove repeated code and make it easier to add new checks

2015-02-19 Thread Vikas Vishwakarma (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328551#comment-14328551
 ] 

Vikas Vishwakarma commented on HBASE-13056:
---

Today I forked the hbase build and then tried to push the changes against it; 
that also worked fine. 
https://github.com/vikkarma/hbase/commit/78d77f54297ca73a0f15687f9abea67ce4d4c197

Giving it one more try against pre-commit. 

 Refactor table.jsp code to remove repeated code and make it easier to add new 
 checks
 

 Key: HBASE-13056
 URL: https://issues.apache.org/jira/browse/HBASE-13056
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Vikas Vishwakarma
Assignee: Vikas Vishwakarma
 Fix For: 2.0.0

 Attachments: HBASE-13056-0.98.patch, HBASE-13056.patch


 While trying to fix HBASE-13001, I realized that there is a lot of html code 
 repetition in table.jsp, which makes adding new checks slightly 
 difficult, in the sense that I would have to either:
 1. Add the check at multiple places in the code
 Or 
 2. Repeat the html code again for the new check 
 So I am proposing to refactor the table.jsp code such that the common html 
 header/body is loaded without any condition check, and then we generate the 
 condition-specific html code. 
 snapshot.jsp follows the same format, as explained below:
 {noformat}
 Current implementation:
 
 if( x ) {
   title_x
   common_html_header
   common_html_body
   x_specific_html_body
 } else {
   title_y
   common_html_header
   common_html_body
   y_specific_html_body
 }
 New Implementation:
 ==
 if( x ) {
   title_x
 } else {
   title_y
 }
 common_html_header
 common_html_body
 if( x ) {
   x_specific_html_body
 } else {
   y_specific_html_body
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13072) BucketCache.evictBlock returns true if block does not exist

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328564#comment-14328564
 ] 

Hudson commented on HBASE-13072:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #821 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/821/])
HBASE-13072 BucketCache.evictBlock returns true if block does not exist (Duo 
Zhang) (tedyu: rev 5301968365df20891c962d5632e5205005ef9e99)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java


 BucketCache.evictBlock returns true if block does not exist
 ---

 Key: HBASE-13072
 URL: https://issues.apache.org/jira/browse/HBASE-13072
 Project: HBase
  Issue Type: Bug
  Components: BlockCache
Affects Versions: 1.0.0, 2.0.0, 0.98.10, 1.1.0
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-13072-0.98.patch, HBASE-13072.patch


 The comment of BlockCache.evictBlock says 'true if block existed and was 
 evicted, false if not' but BucketCache does not follow it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13070) Fix TestCacheOnWrite

2015-02-19 Thread zhangduo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328605#comment-14328605
 ] 

zhangduo commented on HBASE-13070:
--

Oh I think I found the problem.

We turned on hfile prefetching in this test after HBASE-12270 (maybe a mistake).
See 
https://builds.apache.org/job/HBase-TRUNK/6110/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite-output.txt
{noformat}
2015-02-10 06:48:00,128 DEBUG [main] hfile.PrefetchExecutor(102): Prefetch 
requested for 
/home/jenkins/jenkins-slave/workspace/HBase-TRUNK/hbase-server/target/test-data/170a3172-9a2e-4269-8085-8433f46141c6/test_cache_on_write/b7d5cfcd4b57411eaac3ebbb83626249,
 delay=905 ms
{noformat}
And see 
https://builds.apache.org/job/HBase-TRUNK/5796/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite-output.txt
Grep for "Prefetch requested for" there and you get nothing.

We can see that the prefetch operation has a delay, usually nearly 1 second 
in tests, so if the test runs fast enough there is no problem. But if we 
run the test on a slow machine, the BlockCache may be ruined before we finish 
checking the cached blocks, which makes the test fail.

Thanks [~tedyu] for suggesting that I add a log when clearing the BlockCache multiple 
times; that is how I found the actual issue.
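
For illustration, a minimal sketch of the kind of change described (assuming the 
"hbase.rs.prefetchblocksonopen" key; the actual patch may differ): disable 
prefetch-on-open in the test configuration so the delayed prefetch cannot race the 
block-cache assertions.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class DisablePrefetchExample {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // No delayed prefetch runs in the background to repopulate the BlockCache mid-test.
    conf.setBoolean("hbase.rs.prefetchblocksonopen", false);
    System.out.println("prefetch on open = " + conf.get("hbase.rs.prefetchblocksonopen"));
  }
}
{code}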

 Fix TestCacheOnWrite
 

 Key: HBASE-13070
 URL: https://issues.apache.org/jira/browse/HBASE-13070
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: zhangduo
Assignee: zhangduo
 Attachments: HBASE-13070.patch


 TestCacheOnWrite uses TestHFileWriterV2.randomOrderedKey to generate a random 
 byte array, then uses the first 32 bytes as the row and the remaining part as the 
 family and qualifier. But TestHFileWriterV2.randomOrderedKey may return a byte 
 array that contains only 32 bytes, in which case the family and qualifier have zero length.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13071) Hbase Streaming Scan Feature

2015-02-19 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328607#comment-14328607
 ] 

Lars Hofhansl commented on HBASE-13071:
---

There are many ways to do this:
# managing two buffers, one is filled by a background thread, the other used by 
the client thread, then switched.
# managing a queue on the client. The user thread polls from it, and a background 
thread pushes data in as it gets it from the server. A blocking queue makes 
this simple, but comes with synchronization overhead.

In any event, unless we rewrite client and server to support true streaming, it 
means extra buffering of some form regardless of the implementation.
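
A hedged sketch of the second option (placeholder fetch method, not the HBase client 
API): a bounded BlockingQueue decouples the background thread fetching batches from the 
application thread consuming them.
{code}
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class AsyncScanQueueExample {
  private final BlockingQueue<List<String>> cache = new ArrayBlockingQueue<>(2);

  void startPrefetcher() {
    Thread producer = new Thread(() -> {
      try {
        while (true) {
          List<String> batch = fetchNextBatchFromServer();  // placeholder for the scanner RPC
          if (batch.isEmpty()) {
            break;
          }
          cache.put(batch);  // blocks when the consumer falls behind (bounded buffering)
        }
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
      }
    });
    producer.setDaemon(true);
    producer.start();
  }

  List<String> next() throws InterruptedException {
    return cache.take();  // application thread waits only if nothing has been prefetched yet
  }

  private List<String> fetchNextBatchFromServer() {
    return Collections.emptyList();  // stub for the sketch
  }
}
{code}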


 Hbase Streaming Scan Feature
 

 Key: HBASE-13071
 URL: https://issues.apache.org/jira/browse/HBASE-13071
 Project: HBase
  Issue Type: New Feature
Reporter: Eshcar Hillel
 Attachments: HBaseStreamingScanDesign.pdf


 A scan operation iterates over all rows of a table or a subrange of the 
 table. The synchronous nature in which the data is served at the client side 
 hinders the speed at which the application traverses the data: it increases the 
 overall processing time, and may cause a great variance in the times the 
 application waits for the next piece of data.
 The scanner next() method at the client side invokes an RPC to the 
 regionserver and then stores the results in a cache. The application can 
 specify how many rows will be transmitted per RPC; by default this is set to 
 100 rows. 
 The cache can be considered as a producer-consumer queue, where the hbase 
 client pushes the data to the queue and the application consumes it. 
 Currently this queue is synchronous, i.e., blocking. More specifically, when 
 the application consumed all the data from the cache --- so the cache is 
 empty --- the hbase client retrieves additional data from the server and 
 re-fills the cache with new data. During this time the application is blocked.
 Under the assumption that the application processing time can be balanced by 
 the time it takes to retrieve the data, an asynchronous approach can reduce 
 the time the application is waiting for data.
 We attach a design document.
 We also have a patch that is based on a private branch, and some evaluation 
 results of this code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME

2015-02-19 Thread Jonathan Lawlor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Lawlor updated HBASE-11544:

Status: Open  (was: Patch Available)

 [Ergonomics] hbase.client.scanner.caching is dogged and will try to return 
 batch even if it means OOME
 --

 Key: HBASE-11544
 URL: https://issues.apache.org/jira/browse/HBASE-11544
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: Jonathan Lawlor
Priority: Critical
  Labels: beginner
 Attachments: HBASE-11544-v1.patch


 Running some tests, I set hbase.client.scanner.caching=1000.  Dataset has 
 large cells.  I kept OOME'ing.
 Serverside, we should measure how much we've accumulated and return to the 
 client whatever we've gathered once we pass a certain size threshold 
 rather than keep accumulating till we OOME.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12953) RegionServer is not functionally working with AysncRpcClient in secure mode

2015-02-19 Thread Andrey Stepachev (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328278#comment-14328278
 ] 

Andrey Stepachev commented on HBASE-12953:
--

I meant it looks like an unrelated bug; need to dig deeper.

 RegionServer is not functionally working with AysncRpcClient in secure mode
 ---

 Key: HBASE-12953
 URL: https://issues.apache.org/jira/browse/HBASE-12953
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0, 1.1.0
Reporter: Ashish Singhi
Assignee: zhangduo
Priority: Critical
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12953.patch, HBASE-12953_1.patch, 
 HBASE-12953_2.patch, HBASE-12953_2.patch, HBASE-12953_2.patch, 
 HBASE-12953_2.patch, HBASE-12953_2.patch, HBASE-12953_3 (2).patch, 
 HBASE-12953_3.patch, HBASE-12953_3.patch, HBASE-12953_3.patch, testcase.patch


 HBase version 2.0.0
 Default value for {{hbase.rpc.client.impl}} is set to AsyncRpcClient.
 When trying to install HBase with Kerberos, the RegionServer is not working 
 correctly.
 The following log appears in its log file:
 {noformat}
 2015-02-02 14:59:05,407 WARN  [AsyncRpcChannel-pool1-t1] 
 channel.DefaultChannelPipeline: An exceptionCaught() event was fired, and it 
 reached at the tail of the pipeline. It usually means the last handler in the 
 pipeline did not handle the exception.
 io.netty.channel.ChannelPipelineException: 
 org.apache.hadoop.hbase.security.SaslClientHandler.handlerAdded() has thrown 
 an exception; removed.
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:499)
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded(DefaultChannelPipeline.java:481)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst0(DefaultChannelPipeline.java:114)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:97)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:235)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:214)
   at 
 org.apache.hadoop.hbase.ipc.AsyncRpcChannel$2.operationComplete(AsyncRpcChannel.java:194)
   at 
 org.apache.hadoop.hbase.ipc.AsyncRpcChannel$2.operationComplete(AsyncRpcChannel.java:157)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
   at 
 io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
   at 
 io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
   at 
 io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:253)
   at 
 io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:288)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by 
 GSSException: No valid credentials provided (Mechanism level: Failed to find 
 any Kerberos tgt)]
   at 
 com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
   at 
 org.apache.hadoop.hbase.security.SaslClientHandler.handlerAdded(SaslClientHandler.java:154)
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:486)
   ... 20 more
 Caused by: GSSException: No valid credentials provided (Mechanism level: 
 Failed to find any Kerberos tgt)
   at 
 sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
   at 
 sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
   at 
 sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
   at 
 sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
   at 
 sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
   at 
 sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
   at 
 com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
 

[jira] [Commented] (HBASE-12953) RegionServer is not functionally working with AysncRpcClient in secure mode

2015-02-19 Thread Andrey Stepachev (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328277#comment-14328277
 ] 

Andrey Stepachev commented on HBASE-12953:
--

[~stack] It seems there is a problem: some regions suddenly start to open. 
Created a jira for that: HBASE-13076

 RegionServer is not functionally working with AysncRpcClient in secure mode
 ---

 Key: HBASE-12953
 URL: https://issues.apache.org/jira/browse/HBASE-12953
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0, 1.1.0
Reporter: Ashish Singhi
Assignee: zhangduo
Priority: Critical
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12953.patch, HBASE-12953_1.patch, 
 HBASE-12953_2.patch, HBASE-12953_2.patch, HBASE-12953_2.patch, 
 HBASE-12953_2.patch, HBASE-12953_2.patch, HBASE-12953_3 (2).patch, 
 HBASE-12953_3.patch, HBASE-12953_3.patch, HBASE-12953_3.patch, testcase.patch


 HBase version 2.0.0
 Default value for {{hbase.rpc.client.impl}} is set to AsyncRpcClient.
 When trying to install HBase with Kerberos, the RegionServer is not working 
 correctly.
 The following log appears in its log file:
 {noformat}
 2015-02-02 14:59:05,407 WARN  [AsyncRpcChannel-pool1-t1] 
 channel.DefaultChannelPipeline: An exceptionCaught() event was fired, and it 
 reached at the tail of the pipeline. It usually means the last handler in the 
 pipeline did not handle the exception.
 io.netty.channel.ChannelPipelineException: 
 org.apache.hadoop.hbase.security.SaslClientHandler.handlerAdded() has thrown 
 an exception; removed.
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:499)
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded(DefaultChannelPipeline.java:481)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst0(DefaultChannelPipeline.java:114)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:97)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:235)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:214)
   at 
 org.apache.hadoop.hbase.ipc.AsyncRpcChannel$2.operationComplete(AsyncRpcChannel.java:194)
   at 
 org.apache.hadoop.hbase.ipc.AsyncRpcChannel$2.operationComplete(AsyncRpcChannel.java:157)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
   at 
 io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
   at 
 io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
   at 
 io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:253)
   at 
 io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:288)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by 
 GSSException: No valid credentials provided (Mechanism level: Failed to find 
 any Kerberos tgt)]
   at 
 com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
   at 
 org.apache.hadoop.hbase.security.SaslClientHandler.handlerAdded(SaslClientHandler.java:154)
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:486)
   ... 20 more
 Caused by: GSSException: No valid credentials provided (Mechanism level: 
 Failed to find any Kerberos tgt)
   at 
 sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
   at 
 sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
   at 
 sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
   at 
 sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
   at 
 sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
   at 
 sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
   at 
 

[jira] [Updated] (HBASE-13069) Thrift Http Server returns an error code of 500 instead of 401 when authentication fails

2015-02-19 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-13069:
---
   Resolution: Fixed
Fix Version/s: 1.1.0
   1.0.1
   2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Srikanth for the patch.

Thanks to Andrew for the review.

 Thrift Http Server returns an error code of 500 instead of 401 when 
 authentication fails
 

 Key: HBASE-13069
 URL: https://issues.apache.org/jira/browse/HBASE-13069
 Project: HBase
  Issue Type: Bug
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0

 Attachments: HBASE-13069.patch, HBASE-13069.patch


 As per description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13053) Add support of Visibility Labels in PerformanceEvaluation

2015-02-19 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328384#comment-14328384
 ] 

Andrew Purtell commented on HBASE-13053:


Rather than add ad-hoc options to PE, can we do this the way we extended 
LoadTestTool? There we plug in different mutation / KV generators and let the 
tool specify which one it wants (or we could allow this to be a list), plus 
generator-specific options. 

Consider that in addition to labels we should have options for ACLs, cell TTLs, or 
whatever else comes down the line.

 Add support of Visibility Labels in PerformanceEvaluation
 -

 Key: HBASE-13053
 URL: https://issues.apache.org/jira/browse/HBASE-13053
 Project: HBase
  Issue Type: Improvement
Affects Versions: 1.0.0, 0.98.10.1
Reporter: Jerry He
Assignee: Jerry He
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 0.98.11

 Attachments: HBASE-13053-0.98.patch, HBASE-13053-master.patch


 Add support of Visibility Labels in PerformanceEvaluation:
 During write operations, support adding a visibility expression to KVs.
 During read/scan operations, support using visibility authorization.
 Here is the usage:
 {noformat}
 Options:
 ...
 visibilityExp   Writes the visibility expression along with KVs. Use for 
 write commands. Visibility labels need to pre-exist.
 visibilityAuth  Specify the visibility auths (comma-separated labels) used in 
 read or scan. Visibility labels need to pre-exist.
 {noformat}
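 For context, a small sketch of what these options exercise under the hood with the 
 standard client API (the labels "secret" and "admin" are placeholders and must already 
 exist):
 {code}
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.security.visibility.Authorizations;
 import org.apache.hadoop.hbase.security.visibility.CellVisibility;
 import org.apache.hadoop.hbase.util.Bytes;

 public class VisibilityExample {
   public static void main(String[] args) {
     // Write path: attach a visibility expression to the cells of a Put.
     Put put = new Put(Bytes.toBytes("row1"));
     put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
     put.setCellVisibility(new CellVisibility("secret&admin"));

     // Read path: supply the authorizations used to evaluate the expressions.
     Scan scan = new Scan();
     scan.setAuthorizations(new Authorizations("secret", "admin"));
     System.out.println(put);
   }
 }
 {code}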



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13075) TableInputFormatBase spuriously warning about multiple initializeTable calls

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328399#comment-14328399
 ] 

Hudson commented on HBASE-13075:


SUCCESS: Integrated in HBase-1.0 #761 (See 
[https://builds.apache.org/job/HBase-1.0/761/])
HBASE-13075 TableInputFormatBase spuriously warning about multiple 
initializeTable calls (busbey: rev 28ea3e0197def97f900dc57884048c547ca94def)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormatBase.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java


 TableInputFormatBase spuriously warning about multiple initializeTable calls
 

 Key: HBASE-13075
 URL: https://issues.apache.org/jira/browse/HBASE-13075
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 1.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Minor
 Fix For: 1.0.1, 1.1.0, 2.2.0

 Attachments: HBASE-13075.1.patch.txt


 TableInputFormatBase incorrectly checks a local variable (that can't be null) 
 rather than the instance variable (which can be null) to see if it has been 
 called multiple times.
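
 A contrived illustration of the bug pattern being described (hypothetical names, not 
 the real class): the guard tests a method parameter that was just passed in, so it 
 can never be null, instead of the instance field that actually records whether 
 initialization already happened.
 {noformat}
 public class InitOnce {
   private Object connection;   // null until the first initialize() call

   public void initialize(Object conn) {
     // BUG: 'conn' is the local parameter and is non-null on every call,
     // so this warning fires even on the very first initialization.
     if (conn != null) {
       System.out.println("WARN: initialize called multiple times");
     }
     // Fixed version would test the instance field instead:
     // if (this.connection != null) { ... warn ... }
     this.connection = conn;
   }
 }
 {noformat}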



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12953) RegionServer is not functionally working with AysncRpcClient in secure mode

2015-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328451#comment-14328451
 ] 

Hadoop QA commented on HBASE-12953:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12699777/HBASE-12953_3%20%282%29.patch
  against master branch at commit 365054c110467d0628019761791281875631f4be.
  ATTACHMENT ID: 12699777

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.
{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestShell

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12918//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12918//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12918//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12918//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12918//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12918//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12918//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12918//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12918//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12918//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12918//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12918//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12918//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12918//console

This message is automatically generated.

 RegionServer is not functionally working with AysncRpcClient in secure mode
 ---

 Key: HBASE-12953
 URL: https://issues.apache.org/jira/browse/HBASE-12953
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0, 1.1.0
Reporter: Ashish Singhi
Assignee: zhangduo
Priority: Critical
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12953.patch, HBASE-12953_1.patch, 
 HBASE-12953_2.patch, HBASE-12953_2.patch, HBASE-12953_2.patch, 
 HBASE-12953_2.patch, HBASE-12953_2.patch, HBASE-12953_3 (2).patch, 
 HBASE-12953_3 (2).patch, HBASE-12953_3.patch, HBASE-12953_3.patch, 
 HBASE-12953_3.patch, testcase.patch


 HBase version 2.0.0
 Default value for {{hbase.rpc.client.impl}} is set to AsyncRpcClient.
 When trying to install HBase with Kerberos, RegionServer is not working 
 functionally.
 The following log is logged in its log file
 {noformat}
 2015-02-02 14:59:05,407 WARN  [AsyncRpcChannel-pool1-t1] 
 

[jira] [Commented] (HBASE-13069) Thrift Http Server returns an error code of 500 instead of 401 when authentication fails

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328481#comment-14328481
 ] 

Hudson commented on HBASE-13069:


FAILURE: Integrated in HBase-TRUNK #6152 (See 
[https://builds.apache.org/job/HBase-TRUNK/6152/])
HBASE-13069 Thrift Http Server returns an error code of 500 instead of 401 when 
authentication fails (Srikanth Srungarapu) (tedyu: rev 
03d8918142681d4c8abe40e8c8fb32307756d8a8)
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftHttpServlet.java
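
For context, a minimal sketch of the general servlet pattern involved (hypothetical 
helper names, not the actual HBase change): surface an authentication failure as a 
401 response instead of letting the exception escape and turn into a 500.
{noformat}
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AuthAwareServlet extends HttpServlet {
  @Override
  protected void doPost(HttpServletRequest request, HttpServletResponse response)
      throws ServletException, IOException {
    try {
      authenticate(request);   // hypothetical check that throws on bad credentials
    } catch (SecurityException failure) {
      // Report the failure as 401 Unauthorized rather than an unhandled 500.
      response.sendError(HttpServletResponse.SC_UNAUTHORIZED, failure.getMessage());
      return;
    }
    // ... on success, hand the request off to the normal Thrift processing ...
  }

  private void authenticate(HttpServletRequest request) {
    // hypothetical placeholder for the Kerberos/SPNEGO validation
  }
}
{noformat}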


 Thrift Http Server returns an error code of 500 instead of 401 when 
 authentication fails
 

 Key: HBASE-13069
 URL: https://issues.apache.org/jira/browse/HBASE-13069
 Project: HBase
  Issue Type: Bug
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0

 Attachments: HBASE-13069.patch, HBASE-13069.patch


 As per description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13077) BoundedCompletionService doesn't pass trace info to server

2015-02-19 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-13077:
--
Attachment: HBASE-13077.patch

This patch is for 1.0. Thanks.

 BoundedCompletionService doesn't pass trace info to server
 --

 Key: HBASE-13077
 URL: https://issues.apache.org/jira/browse/HBASE-13077
 Project: HBase
  Issue Type: Bug
  Components: hbase
Affects Versions: 1.0.0, 2.0.0, 1.1.0
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Attachments: HBASE-13077.patch


 Today [~ndimiduk] & I found that BoundedCompletionService doesn't pass htrace 
 info to the server.
 [~enis] FYI.
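
One common way to carry a trace across a thread-pool boundary, assuming the 
org.apache.htrace 3.x API that HBase 1.x uses (this is only a sketch of the idea, 
not necessarily what the attached patch does): wrap the submitted Callable so the 
caller's span becomes the parent of the work done on the executor thread.
{noformat}
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import org.apache.htrace.Trace;

public class TracedSubmit {
  // 'completionService' and 'call' stand in for the real scanner plumbing.
  static <V> void submitTraced(CompletionService<V> completionService, Callable<V> call) {
    // Trace.wrap captures the current span (if any) so the work done on the
    // executor thread, including the outgoing RPC, joins the caller's trace.
    completionService.submit(Trace.wrap(call));
  }
}
{noformat}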



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13069) Thrift Http Server returns an error code of 500 instead of 401 when authentication fails

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328490#comment-14328490
 ] 

Hudson commented on HBASE-13069:


FAILURE: Integrated in HBase-1.1 #200 (See 
[https://builds.apache.org/job/HBase-1.1/200/])
HBASE-13069 Thrift Http Server returns an error code of 500 instead of 401 when 
authentication fails (Srikanth Srungarapu) (tedyu: rev 
0a21b1e226f8e13e9cde34f70a4dd2459128f00e)
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftHttpServlet.java


 Thrift Http Server returns an error code of 500 instead of 401 when 
 authentication fails
 

 Key: HBASE-13069
 URL: https://issues.apache.org/jira/browse/HBASE-13069
 Project: HBase
  Issue Type: Bug
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0

 Attachments: HBASE-13069.patch, HBASE-13069.patch


 As per description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13077) BoundedCompletionService doesn't pass trace info to server

2015-02-19 Thread Jeffrey Zhong (JIRA)
Jeffrey Zhong created HBASE-13077:
-

 Summary: BoundedCompletionService doesn't pass trace info to server
 Key: HBASE-13077
 URL: https://issues.apache.org/jira/browse/HBASE-13077
 Project: HBase
  Issue Type: Bug
  Components: hbase
Affects Versions: 1.0.0, 2.0.0, 1.1.0
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong


Today [~ndimiduk] & I found that BoundedCompletionService doesn't pass htrace 
info to the server.

[~enis] FYI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13077) BoundedCompletionService doesn't pass trace info to server

2015-02-19 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-13077:
--
Status: Patch Available  (was: Open)

 BoundedCompletionService doesn't pass trace info to server
 --

 Key: HBASE-13077
 URL: https://issues.apache.org/jira/browse/HBASE-13077
 Project: HBase
  Issue Type: Bug
  Components: hbase
Affects Versions: 1.0.0, 2.0.0, 1.1.0
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Attachments: HBASE-13077.patch


 Today [~ndimiduk] & I found that BoundedCompletionService doesn't pass htrace 
 info to the server.
 [~enis] FYI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12948) Calling Increment#addColumn on the same column multiple times produces wrong result

2015-02-19 Thread hongyu bi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327686#comment-14327686
 ] 

hongyu bi commented on HBASE-12948:
---

Thanks Ted, Andrew and Stack :)
I opened HBASE-13073 for API issues.

 Calling Increment#addColumn on the same column multiple times produces wrong 
 result 
 

 Key: HBASE-12948
 URL: https://issues.apache.org/jira/browse/HBASE-12948
 Project: HBase
  Issue Type: Bug
  Components: Client, regionserver
Reporter: hongyu bi
Assignee: hongyu bi
Priority: Critical
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: 12948-0.98.txt, 12948-v2.patch, 12948-v2.patch, 
 HBASE-12948-0.99.2-v1.patch, HBASE-12948-v0.patch, HBASE-12948.patch


 Case:
 Initially get('row1'):
 rowkey=row1 value=1
 run:
 Increment increment = new Increment(Bytes.toBytes("row1"));
 for (int i = 0; i < N; i++) {
   increment.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("c"), 1);
 }
 hobi.increment(increment);
 get('row1'):
 if N=1 then the result is 2, else if N>1 the result will always be 1.
 Cause:
 https://issues.apache.org/jira/browse/HBASE-7114 let Increment extend 
 Mutation, which changed familyMap from a NavigableMap to a List, so from the 
 client side we can buffer many edits on the same column.
 However, HRegion#increment uses idx to iterate the get's results; here 
 results.size < family.value().size if N>1, so the later edits on the same 
 column won't match the condition {idx < results.size() && 
 CellUtil.matchingQualifier(results.get(idx), kv)}; meanwhile the edits share 
 the same mvccVersion, so this case happens.
 Fix:
 Following the put/delete#add behaviour on the same column,
 fix from the server side: make the last edit win on the same column inside 
 HRegion#increment, to maintain HBASE-7114's extension and keep the same 
 result as in 0.94.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13072) BucketCache.evictBlock returns true if block does not exist

2015-02-19 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-13072:
---
Summary: BucketCache.evictBlock returns true if block does not exist  (was: 
BucketCache.evictBlock returns true if block not exists)

 BucketCache.evictBlock returns true if block does not exist
 ---

 Key: HBASE-13072
 URL: https://issues.apache.org/jira/browse/HBASE-13072
 Project: HBase
  Issue Type: Bug
  Components: BlockCache
Affects Versions: 1.0.0, 2.0.0, 0.98.10, 1.1.0
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-13072.patch


 The comment of BlockCache.evictBlock says 'true if block existed and was 
 evicted, false if not' but BucketCache does not follow it.
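
 A minimal sketch of the documented contract (a generic map-backed cache, not 
 BucketCache's real internals): only report success when the key was actually present 
 and removed.
 {noformat}
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;

 public class ContractSketch<K, V> {
   private final ConcurrentMap<K, V> cached = new ConcurrentHashMap<K, V>();

   /** @return true if the block existed and was evicted, false if not. */
   public boolean evictBlock(K cacheKey) {
     // remove() returns null when nothing was cached under the key,
     // so a miss is reported as false instead of a blanket true.
     return cached.remove(cacheKey) != null;
   }
 }
 {noformat}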



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13072) BucketCache.evictBlock returns true if block does not exist

2015-02-19 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327693#comment-14327693
 ] 

Ted Yu commented on HBASE-13072:


Test failure was not related to patch.

Ping [~enis], [~apurtell].

 BucketCache.evictBlock returns true if block does not exist
 ---

 Key: HBASE-13072
 URL: https://issues.apache.org/jira/browse/HBASE-13072
 Project: HBase
  Issue Type: Bug
  Components: BlockCache
Affects Versions: 1.0.0, 2.0.0, 0.98.10, 1.1.0
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-13072.patch


 The comment of BlockCache.evictBlock says 'true if block existed and was 
 evicted, false if not' but BucketCache does not follow it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13073) refactor mutation's familyMap in case of multi mutation on same column

2015-02-19 Thread hongyu bi (JIRA)
hongyu bi created HBASE-13073:
-

 Summary: refactor mutation's familyMap in case of multi mutation 
on same column
 Key: HBASE-13073
 URL: https://issues.apache.org/jira/browse/HBASE-13073
 Project: HBase
  Issue Type: Bug
  Components: Client
Reporter: hongyu bi
Assignee: hongyu bi


Per HBASE-12948, it was found that we can issue multiple mutations on the same column 
in a Mutation object, which makes no sense (and even produced wrong results before 
HBASE-12948) but puts more traffic on the RS. So we want to refactor the mutation's 
familyMap to handle multiple mutations on the same column.
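
A hypothetical illustration of the intent (not the eventual design): when the caller 
adds the same column more than once, collapse the duplicates so only the last edit 
survives, mirroring the last-edit-wins behaviour described in HBASE-12948, instead of 
shipping redundant cells to the RegionServer.
{noformat}
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.util.Bytes;

public class DedupSketch {
  /** Collapse repeated edits on the same qualifier so only the last amount is sent. */
  static Increment buildIncrement(byte[] row, byte[] family,
      List<String> qualifiers, List<Long> amounts) {
    Map<String, Long> lastWins = new LinkedHashMap<String, Long>();
    for (int i = 0; i < qualifiers.size(); i++) {
      lastWins.put(qualifiers.get(i), amounts.get(i));   // later entries overwrite earlier ones
    }
    Increment increment = new Increment(row);
    for (Map.Entry<String, Long> e : lastWins.entrySet()) {
      increment.addColumn(family, Bytes.toBytes(e.getKey()), e.getValue());
    }
    return increment;
  }
}
{noformat}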



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13072) BucketCache.evictBlock returns true if block does not exist

2015-02-19 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327698#comment-14327698
 ] 

Ted Yu commented on HBASE-13072:


Integrated to branch-1 and master.

Thanks for the patch, Duo.

 BucketCache.evictBlock returns true if block does not exist
 ---

 Key: HBASE-13072
 URL: https://issues.apache.org/jira/browse/HBASE-13072
 Project: HBase
  Issue Type: Bug
  Components: BlockCache
Affects Versions: 1.0.0, 2.0.0, 0.98.10, 1.1.0
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-13072.patch


 The comment of BlockCache.evictBlock says 'true if block existed and was 
 evicted, false if not' but BucketCache does not follow it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13054) Provide more tracing information for locking/latching events.

2015-02-19 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated HBASE-13054:

Attachment: HBASE-13054.patch

Here is a patch that adds tracing in a few places:
1) While getting a row lock there is possible lock contention under heavy load, so 
trace info was added there.
2) Trace info for block cache hits and for scan requests going through the memstore.

For the most part there should not be many lock contention issues, because shared 
locks are used in almost all places. That's why not much more tracing info is being 
added. 

 Provide more tracing information for locking/latching events.
 -

 Key: HBASE-13054
 URL: https://issues.apache.org/jira/browse/HBASE-13054
 Project: HBase
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 2.0.0, 1.0.1, 1.1.0

 Attachments: HBASE-13054.patch


 Currently not much tracing information is available for locking and latching 
 events, like row-level locking during mini-batch mutations, region-level 
 locking during flush, close and so on. It would be better to add trace 
 information for such events, so that it is useful for finding the time spent 
 on locking and the wait time on locks while analyzing performance issues in 
 queries using trace information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME

2015-02-19 Thread Jonathan Lawlor (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327722#comment-14327722
 ] 

Jonathan Lawlor commented on HBASE-11544:
-

[~lhofhansl] thanks for the comments

bq. Is the limit per cell or per row?

Sorry, let me be clear in what I mean when I say cell level and row level:

Partitioning at the row level (the current behavior):
Currently, the maxResultSize operates at the row level on the server. What I 
mean by this is that the result size limit is checked after each row's worth of 
cells is fetched. This presented the problem of running into OOME for large 
rows because a single row may be many times larger than the maxResultSize. 
Thus, when trying to retrieve all the cells for a single large row we would 
continue to traverse the row even when we had already passed the result size 
limit, and only realize we had exceeded the limit once the entire row's worth 
of cells had been retrieved.

Partitioning at the cell level (the new behavior):
The solution that has been implemented above moves the concept of maxResultSize 
down from the row level to the cell level. What this means is that the result 
size limit is checked after each cell/keyValue is fetched. This is nice because 
it provides a more precise size restriction on result size than the current 
solution. When the result size limit is reached while fetching the 
cells/keyValues for a particular row, that row will be returned as partial 
results that must be reconstructed client-side (i.e. the server will never 
contain the entire row's worth of cells in memory at once).

So when I said the server will only ever see partial results for very large 
rows, what I mean is: if the row is very large, the server will be returning 
partial results for that row in separate RPC responses, and thus, will never 
hold the entire row in memory but rather parts of it at different points in 
time.
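
A simplified illustration of the difference (helper names like estimatedSizeOf are 
rough stand-ins, and this is not the actual scanner code): the limit is checked after 
every cell rather than after every row, so an oversized row can be shipped back in 
pieces instead of being held in memory whole.
{noformat}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.Cell;

public class CellLevelLimitSketch {
  /** Accumulate cells of one row, stopping as soon as maxResultSize is reached. */
  static List<Cell> fetchRow(Iterable<Cell> cellsOfCurrentRow, long maxResultSize) {
    List<Cell> batch = new ArrayList<Cell>();
    long accumulated = 0;
    for (Cell cell : cellsOfCurrentRow) {
      batch.add(cell);
      accumulated += estimatedSizeOf(cell);   // rough per-cell size estimate
      if (accumulated >= maxResultSize) {
        // Old behaviour kept going until the whole row was in memory;
        // new behaviour returns here and marks the result as partial.
        return batch;
      }
    }
    return batch;
  }

  static long estimatedSizeOf(Cell cell) {
    return cell.getRowLength() + cell.getQualifierLength() + cell.getValueLength();
  }
}
{noformat}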

 [Ergonomics] hbase.client.scanner.caching is dogged and will try to return 
 batch even if it means OOME
 --

 Key: HBASE-11544
 URL: https://issues.apache.org/jira/browse/HBASE-11544
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: Jonathan Lawlor
Priority: Critical
  Labels: beginner
 Attachments: HBASE-11544-v1.patch


 Running some tests, I set hbase.client.scanner.caching=1000.  Dataset has 
 large cells.  I kept OOME'ing.
 Server-side, we should measure how much we've accumulated and return to the 
 client whatever we've gathered once we pass a certain size threshold, 
 rather than keep accumulating till we OOME.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13072) BucketCache.evictBlock returns true if block does not exist

2015-02-19 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327784#comment-14327784
 ] 

Andrew Purtell commented on HBASE-13072:


+1

 BucketCache.evictBlock returns true if block does not exist
 ---

 Key: HBASE-13072
 URL: https://issues.apache.org/jira/browse/HBASE-13072
 Project: HBase
  Issue Type: Bug
  Components: BlockCache
Affects Versions: 1.0.0, 2.0.0, 0.98.10, 1.1.0
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-13072.patch


 The comment of BlockCache.evictBlock says 'true if block existed and was 
 evicted, false if not' but BucketCache does not follow it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13072) BucketCache.evictBlock returns true if block does not exist

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327782#comment-14327782
 ] 

Hudson commented on HBASE-13072:


FAILURE: Integrated in HBase-1.1 #198 (See 
[https://builds.apache.org/job/HBase-1.1/198/])
HBASE-13072 BucketCache.evictBlock returns true if block does not exist (Duo 
Zhang) (tedyu: rev 6b44b734040ea7e6c9bdcf04ee61a738989adcb9)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java


 BucketCache.evictBlock returns true if block does not exist
 ---

 Key: HBASE-13072
 URL: https://issues.apache.org/jira/browse/HBASE-13072
 Project: HBase
  Issue Type: Bug
  Components: BlockCache
Affects Versions: 1.0.0, 2.0.0, 0.98.10, 1.1.0
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-13072.patch


 The comment of BlockCache.evictBlock says 'true if block existed and was 
 evicted, false if not' but BucketCache does not follow it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13054) Provide more tracing information for locking/latching events.

2015-02-19 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327788#comment-14327788
 ] 

Andrew Purtell commented on HBASE-13054:


The HRegion changes don't wrap the scope and span allocations in blocks 
conditional on Trace.isTracing(). We should have this there too? 
Otherwise patch lgtm.
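
A short sketch of the guard being suggested, assuming the org.apache.htrace 3.x API 
(the span name and surrounding code are hypothetical): only allocate a scope/span 
when tracing is actually enabled, so the untraced path pays nothing.
{noformat}
import org.apache.htrace.Trace;
import org.apache.htrace.TraceScope;

public class GuardedTracing {
  static void doGuardedWork() {
    TraceScope scope = null;
    try {
      if (Trace.isTracing()) {
        scope = Trace.startSpan("HRegion.doMiniBatchMutation");  // hypothetical span name
      }
      // ... the actual work being traced goes here ...
    } finally {
      if (scope != null) {
        scope.close();
      }
    }
  }
}
{noformat}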



 Provide more tracing information for locking/latching events.
 -

 Key: HBASE-13054
 URL: https://issues.apache.org/jira/browse/HBASE-13054
 Project: HBase
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 2.0.0, 1.0.1, 1.1.0

 Attachments: HBASE-13054.patch


 Currently not much tracing information is available for locking and latching 
 events, like row-level locking during mini-batch mutations, region-level 
 locking during flush, close and so on. It would be better to add trace 
 information for such events, so that it is useful for finding the time spent 
 on locking and the wait time on locks while analyzing performance issues in 
 queries using trace information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13069) Thrift Http Server returning an error code of 500 instead of 401 when authentication fails.

2015-02-19 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327789#comment-14327789
 ] 

Andrew Purtell commented on HBASE-13069:


+1

 Thrift Http Server returning an error code of 500 instead of 401 when 
 authentication fails.
 ---

 Key: HBASE-13069
 URL: https://issues.apache.org/jira/browse/HBASE-13069
 Project: HBase
  Issue Type: Bug
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Attachments: HBASE-13069.patch, HBASE-13069.patch


 As per description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13071) Hbase Streaming Scan Feature

2015-02-19 Thread Jonathan Lawlor (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327808#comment-14327808
 ] 

Jonathan Lawlor commented on HBASE-13071:
-

This sounds like a great feature. 

There is some discussion over in HBASE-11544 about the inefficiency of the way 
that the current (synchronous) scanners use the network (which led to 
HBASE-12994), as well as discussion about how to move Scan RPCs into the realm 
of streaming. This seems like it would address both of those issues and should 
provide some nice performance gains. 

Looking forward to this

 Hbase Streaming Scan Feature
 

 Key: HBASE-13071
 URL: https://issues.apache.org/jira/browse/HBASE-13071
 Project: HBase
  Issue Type: New Feature
Reporter: Eshcar Hillel
 Attachments: HBaseStreamingScanDesign.pdf


 A scan operation iterates over all rows of a table or a subrange of the 
 table. The synchronous nature in which the data is served at the client side 
 hinders the speed at which the application traverses the data: it increases the 
 overall processing time, and may cause great variance in the times the 
 application waits for the next piece of data.
 The scanner next() method at the client side invokes an RPC to the 
 regionserver and then stores the results in a cache. The application can 
 specify how many rows will be transmitted per RPC; by default this is set to 
 100 rows. 
 The cache can be considered as a producer-consumer queue, where the hbase 
 client pushes the data to the queue and the application consumes it. 
 Currently this queue is synchronous, i.e., blocking. More specifically, when 
 the application consumed all the data from the cache --- so the cache is 
 empty --- the hbase client retrieves additional data from the server and 
 re-fills the cache with new data. During this time the application is blocked.
 Under the assumption that the application processing time can be balanced by 
 the time it takes to retrieve the data, an asynchronous approach can reduce 
 the time the application is waiting for data.
 We attach a design document.
 We also have a patch that is based on a private branch, and some evaluation 
 results of this code.
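
 A rough sketch of the asynchronous idea (this only illustrates the producer-consumer 
 queue described above, not the attached design or patch): a background task keeps 
 prefetching the next batch while the application drains a bounded queue, so next() 
 rarely blocks on an RPC.
 {noformat}
 import java.util.concurrent.BlockingQueue;
 import java.util.concurrent.ExecutorService;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.ResultScanner;

 public class PrefetchingScanSketch {
   static void prefetch(final ResultScanner scanner, final int caching,
       ExecutorService executor, final BlockingQueue<Result[]> prefetched) {
     // Producer: fills the bounded queue ahead of the consumer.
     executor.submit(new Runnable() {
       @Override
       public void run() {
         try {
           Result[] batch;
           while ((batch = scanner.next(caching)).length > 0) {
             prefetched.put(batch);         // blocks when the queue is already full
           }
           prefetched.put(new Result[0]);   // empty batch signals end of scan
         } catch (Exception e) {
           // real code would propagate the failure to the consumer
         }
       }
     });
     // Consumer side: prefetched.take() usually finds a batch already waiting.
   }
 }
 {noformat}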



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12953) RegionServer is not functionally working with AysncRpcClient in secure mode

2015-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328279#comment-14328279
 ] 

Hadoop QA commented on HBASE-12953:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12699743/HBASE-12953_3%20%282%29.patch
  against master branch at commit 18402cc850b143bc6f88d90e62c42b9ef4131ca6.
  ATTACHMENT ID: 12699743

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.
{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestShell

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12916//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12916//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12916//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12916//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12916//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12916//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12916//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12916//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12916//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12916//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12916//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12916//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12916//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12916//console

This message is automatically generated.

 RegionServer is not functionally working with AysncRpcClient in secure mode
 ---

 Key: HBASE-12953
 URL: https://issues.apache.org/jira/browse/HBASE-12953
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0, 1.1.0
Reporter: Ashish Singhi
Assignee: zhangduo
Priority: Critical
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12953.patch, HBASE-12953_1.patch, 
 HBASE-12953_2.patch, HBASE-12953_2.patch, HBASE-12953_2.patch, 
 HBASE-12953_2.patch, HBASE-12953_2.patch, HBASE-12953_3 (2).patch, 
 HBASE-12953_3.patch, HBASE-12953_3.patch, HBASE-12953_3.patch, testcase.patch


 HBase version 2.0.0
 Default value for {{hbase.rpc.client.impl}} is set to AsyncRpcClient.
 When trying to install HBase with Kerberos, RegionServer is not working 
 functionally.
 The following log is logged in its log file
 {noformat}
 2015-02-02 14:59:05,407 WARN  [AsyncRpcChannel-pool1-t1] 
 channel.DefaultChannelPipeline: An exceptionCaught() event was fired, and it 
 reached at the tail of the pipeline. It usually means the 

[jira] [Updated] (HBASE-13072) BucketCache.evictBlock returns true if block does not exist

2015-02-19 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-13072:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

 BucketCache.evictBlock returns true if block does not exist
 ---

 Key: HBASE-13072
 URL: https://issues.apache.org/jira/browse/HBASE-13072
 Project: HBase
  Issue Type: Bug
  Components: BlockCache
Affects Versions: 1.0.0, 2.0.0, 0.98.10, 1.1.0
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-13072-0.98.patch, HBASE-13072.patch


 The comment of BlockCache.evictBlock says 'true if block existed and was 
 evicted, false if not' but BucketCache does not follow it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13053) Add support of Visibility Labels in PerformanceEvaluation

2015-02-19 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328476#comment-14328476
 ] 

Jerry He commented on HBASE-13053:
--

Hi, [~apurtell]

Let me explore that direction. Thanks for the suggestion.
Will report back here.

 Add support of Visibility Labels in PerformanceEvaluation
 -

 Key: HBASE-13053
 URL: https://issues.apache.org/jira/browse/HBASE-13053
 Project: HBase
  Issue Type: Improvement
Affects Versions: 1.0.0, 0.98.10.1
Reporter: Jerry He
Assignee: Jerry He
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 0.98.11

 Attachments: HBASE-13053-0.98.patch, HBASE-13053-master.patch


 Add support of Visibility Labels in PerformanceEvaluation:
 During write operations, support adding a visibility expression to KVs.
 During read/scan operations, support using visibility authorization.
 Here is the usage:
 {noformat}
 Options:
 ...
 visibilityExp   Writes the visibility expression along with KVs. Use for 
 write commands. Visibility labels need to pre-exist.
 visibilityAuth  Specify the visibility auths (comma-separated labels) used in 
 read or scan. Visibility labels need to pre-exist.
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12953) RegionServer is not functionally working with AysncRpcClient in secure mode

2015-02-19 Thread zhangduo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328494#comment-14328494
 ] 

zhangduo commented on HBASE-12953:
--

[~stack] I'm not sure right now. I ran TestMasterObserver locally and it passed.
The log file of TestMasterObserver in the PreCommit build is too large, and I have 
not found any useful information in it yet.

[~octo47] Do you mean that HBASE-13076 caused TestMasterObserver to fail?

Thanks.

 RegionServer is not functionally working with AysncRpcClient in secure mode
 ---

 Key: HBASE-12953
 URL: https://issues.apache.org/jira/browse/HBASE-12953
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0, 1.1.0
Reporter: Ashish Singhi
Assignee: zhangduo
Priority: Critical
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12953.patch, HBASE-12953_1.patch, 
 HBASE-12953_2.patch, HBASE-12953_2.patch, HBASE-12953_2.patch, 
 HBASE-12953_2.patch, HBASE-12953_2.patch, HBASE-12953_3 (2).patch, 
 HBASE-12953_3 (2).patch, HBASE-12953_3.patch, HBASE-12953_3.patch, 
 HBASE-12953_3.patch, testcase.patch


 HBase version 2.0.0
 Default value for {{hbase.rpc.client.impl}} is set to AsyncRpcClient.
 When trying to install HBase with Kerberos, RegionServer is not working 
 functionally.
 The following log is logged in its log file
 {noformat}
 2015-02-02 14:59:05,407 WARN  [AsyncRpcChannel-pool1-t1] 
 channel.DefaultChannelPipeline: An exceptionCaught() event was fired, and it 
 reached at the tail of the pipeline. It usually means the last handler in the 
 pipeline did not handle the exception.
 io.netty.channel.ChannelPipelineException: 
 org.apache.hadoop.hbase.security.SaslClientHandler.handlerAdded() has thrown 
 an exception; removed.
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:499)
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded(DefaultChannelPipeline.java:481)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst0(DefaultChannelPipeline.java:114)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:97)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:235)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:214)
   at 
 org.apache.hadoop.hbase.ipc.AsyncRpcChannel$2.operationComplete(AsyncRpcChannel.java:194)
   at 
 org.apache.hadoop.hbase.ipc.AsyncRpcChannel$2.operationComplete(AsyncRpcChannel.java:157)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
   at 
 io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
   at 
 io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
   at 
 io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:253)
   at 
 io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:288)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by 
 GSSException: No valid credentials provided (Mechanism level: Failed to find 
 any Kerberos tgt)]
   at 
 com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
   at 
 org.apache.hadoop.hbase.security.SaslClientHandler.handlerAdded(SaslClientHandler.java:154)
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:486)
   ... 20 more
 Caused by: GSSException: No valid credentials provided (Mechanism level: 
 Failed to find any Kerberos tgt)
   at 
 sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
   at 
 sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
   at 
 sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
   at 
 sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
   at 
 

[jira] [Commented] (HBASE-13075) TableInputFormatBase spuriously warning about multiple initializeTable calls

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328392#comment-14328392
 ] 

Hudson commented on HBASE-13075:


FAILURE: Integrated in HBase-1.1 #199 (See 
[https://builds.apache.org/job/HBase-1.1/199/])
HBASE-13075 TableInputFormatBase spuriously warning about multiple 
initializeTable calls (busbey: rev 49ae4ab672676b387d67c4d2ceaad707358d7cc0)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormatBase.java


 TableInputFormatBase spuriously warning about multiple initializeTable calls
 

 Key: HBASE-13075
 URL: https://issues.apache.org/jira/browse/HBASE-13075
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 1.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Minor
 Fix For: 1.0.1, 1.1.0, 2.2.0

 Attachments: HBASE-13075.1.patch.txt


 TableInputFormatBase incorrectly checks a local variable (that can't be null) 
 rather than the instance variable (which can be null) to see if it has been 
 called multiple times.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13077) BoundedCompletionService doesn't pass trace info to server

2015-02-19 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-13077:
--
Description: 
Today [~ndimiduk] & I found that BoundedCompletionService doesn't pass htrace 
info to the server. Because of this, scans don't pass trace info to the server.

[~enis] FYI.

  was:
Today [~ndimiduk] & I found that BoundedCompletionService doesn't pass htrace 
info to the server.

[~enis] FYI.


 BoundedCompletionService doesn't pass trace info to server
 --

 Key: HBASE-13077
 URL: https://issues.apache.org/jira/browse/HBASE-13077
 Project: HBase
  Issue Type: Bug
  Components: hbase
Affects Versions: 1.0.0, 2.0.0, 1.1.0
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Attachments: HBASE-13077.patch


 Today [~ndimiduk] & I found that BoundedCompletionService doesn't pass htrace 
 info to the server. Because of this, scans don't pass trace info to the server.
 [~enis] FYI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13072) BucketCache.evictBlock returns true if block does not exist

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328268#comment-14328268
 ] 

Hudson commented on HBASE-13072:


SUCCESS: Integrated in HBase-1.0 #760 (See 
[https://builds.apache.org/job/HBase-1.0/760/])
HBASE-13072 BucketCache.evictBlock returns true if block does not exist (Duo 
Zhang) (tedyu: rev 30a646f77d8c109c9eb382a31f4488b251e43154)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java


 BucketCache.evictBlock returns true if block does not exist
 ---

 Key: HBASE-13072
 URL: https://issues.apache.org/jira/browse/HBASE-13072
 Project: HBase
  Issue Type: Bug
  Components: BlockCache
Affects Versions: 1.0.0, 2.0.0, 0.98.10, 1.1.0
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-13072.patch


 The comment of BlockCache.evictBlock says 'true if block existed and was 
 evicted, false if not' but BucketCache does not follow it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13075) TableInputFormatBase spuriously warning about multiple initializeTable calls

2015-02-19 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13075:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

pushed to 1.0.1+. thanks for the review Ted.

 TableInputFormatBase spuriously warning about multiple initializeTable calls
 

 Key: HBASE-13075
 URL: https://issues.apache.org/jira/browse/HBASE-13075
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 1.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Minor
 Fix For: 1.0.1, 1.1.0, 2.2.0

 Attachments: HBASE-13075.1.patch.txt


 TableInputFormatBase incorrectly checks a local variable (that can't be null) 
 rather than the instance variable (which can be null) to see if it has been 
 called multiple times.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

