[jira] [Commented] (HBASE-11915) Document and test 0.94 - 1.0.0 update

2014-10-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181072#comment-14181072
 ] 

Hudson commented on HBASE-11915:


FAILURE: Integrated in HBase-1.0 #347 (See 
[https://builds.apache.org/job/HBase-1.0/347/])
HBASE-11915 Document and test 0.94 - 1.0.0 update -- ADDENDUM (stack: rev 
46e4bffc2c7209b8d6620bdd187d9b43200770b8)
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java


 Document and test 0.94 - 1.0.0 update
 --

 Key: HBASE-11915
 URL: https://issues.apache.org/jira/browse/HBASE-11915
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Assignee: stack
Priority: Critical
 Fix For: 0.99.2

 Attachments: 11915.addendum.txt, 11915.txt, upgrade.txt


 We explicitly did not remove some of the upgrade-related stuff in branch-1 
 for the possibility of supporting 0.94 - 1.0, similar to the 0.94 - 0.98 
 support. 
 We should document and test this support before 1.0 comes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11915) Document and test 0.94 - 1.0.0 update

2014-10-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181081#comment-14181081
 ] 

Hudson commented on HBASE-11915:


FAILURE: Integrated in HBase-0.98 #628 (See 
[https://builds.apache.org/job/HBase-0.98/628/])
HBASE-11915 Document and test 0.94 - 1.0.0 update -- ADDENDUM (stack: rev 
046c4ce62da624f1a98a78e3fcd5204a7a420585)
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java


 Document and test 0.94 - 1.0.0 update
 --

 Key: HBASE-11915
 URL: https://issues.apache.org/jira/browse/HBASE-11915
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Assignee: stack
Priority: Critical
 Fix For: 0.99.2

 Attachments: 11915.addendum.txt, 11915.txt, upgrade.txt


 We explicitly did not remove some of the upgrade-related stuff in branch-1 
 for the possibility of supporting 0.94 - 1.0, similar to the 0.94 - 0.98 
 support. 
 We should document and test this support before 1.0 comes. 





[jira] [Commented] (HBASE-11915) Document and test 0.94 - 1.0.0 update

2014-10-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181087#comment-14181087
 ] 

Hudson commented on HBASE-11915:


SUCCESS: Integrated in HBase-TRUNK #5693 (See 
[https://builds.apache.org/job/HBase-TRUNK/5693/])
HBASE-11915 Document and test 0.94 - 1.0.0 update -- ADDENDUM (stack: rev 
96f84594eee58b4e9a9347541baa3343a4ed3b97)
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java


 Document and test 0.94 - 1.0.0 update
 --

 Key: HBASE-11915
 URL: https://issues.apache.org/jira/browse/HBASE-11915
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Assignee: stack
Priority: Critical
 Fix For: 0.99.2

 Attachments: 11915.addendum.txt, 11915.txt, upgrade.txt


 We explicitly did not remove some of the upgrade-related stuff in branch-1 
 for the possibility of supporting 0.94 - 1.0, similar to the 0.94 - 0.98 
 support. 
 We should document and test this support before 1.0 comes. 





[jira] [Updated] (HBASE-12325) Add Utility to remove snapshot from a directory

2014-10-23 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-12325:

Attachment: DeleteRemoteSnapshotTool.java

I had a tool that I wrote a long time ago; I haven't checked whether it still 
works, but it should.

Anyway, you are using export in the wrong way :)
The initial design was exporting from one HBase cluster to another, not from 
HBase to a backup disk, mainly because you need someone to take care of 
cleaning up unused files.

So, if you want to export to disk you can:
 * Create one folder per snapshot, which allows you to use rm -rf to drop the 
snapshot, but you don't get delta updates.
 * Create a month-x folder and put all the snapshots of month x in it, which 
gets you the delta updates and still lets you drop all the snapshots of month-x 
with a simple rm -rf.

...or you can use the tool, but the tool requires coordination.
You cannot run the tool and an export at the same time, otherwise the tool 
may remove files from an export still in progress.
So, in my opinion this tool doesn't belong in hbase-core: once your files are 
not under HBase's control, it is your responsibility to do the coordination 
(so don't try to propose a zk-lock taken by both the tool and export, or 
similar).
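The "one folder per snapshot" layout described above can be sketched as follows. This is an illustrative sketch using plain java.nio, not the attached DeleteRemoteSnapshotTool; the paths and class name are made up:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class SnapshotDirCleanup {
    // Drop one exported snapshot: with a folder-per-snapshot layout this
    // is just a recursive delete, the Java equivalent of "rm -rf".
    static void deleteSnapshot(Path exportRoot, String snapshotName) throws IOException {
        Path dir = exportRoot.resolve(snapshotName);
        if (!Files.exists(dir)) {
            return; // nothing to do
        }
        try (Stream<Path> walk = Files.walk(dir)) {
            // delete children before their parent directories
            walk.sorted(Comparator.reverseOrder()).forEach(p -> {
                try {
                    Files.delete(p);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }
}
```

Note that this only works because no files are shared between export directories; with shared files (the delta-update layout), a plain recursive delete would corrupt the other snapshots, which is exactly why the tool needs coordination.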

 Add Utility to remove snapshot from a directory
 ---

 Key: HBASE-12325
 URL: https://issues.apache.org/jira/browse/HBASE-12325
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: DeleteRemoteSnapshotTool.java


 If there are several snapshots exported to a single directory, it's nice to 
 be able to remove the oldest one. Since snapshots in the same directory can 
 share files it's not as simple as just removing all files in a snapshot.





[jira] [Created] (HBASE-12327) MetricsHBaseServerSourceFactory#createContextName has wrong conditions

2014-10-23 Thread Sanghyun Yun (JIRA)
Sanghyun Yun created HBASE-12327:


 Summary: MetricsHBaseServerSourceFactory#createContextName has 
wrong conditions
 Key: HBASE-12327
 URL: https://issues.apache.org/jira/browse/HBASE-12327
 Project: HBase
  Issue Type: Bug
Reporter: Sanghyun Yun


MetricsHBaseServerSourceFactory#createContextName has wrong conditions.

It checks whether serverName contains "HMaster" or "HRegion".

{code:title=MetricsHBaseServerSourceFactory.java}
...
  protected static String createContextName(String serverName) {
    if (serverName.contains("HMaster")) {
      return "Master";
    } else if (serverName.contains("HRegion")) {
      return "RegionServer";
    }
    return "IPC";
  }
...
{code}

But the serverName actually passed in contains "master" or "regionserver", per 
HMaster#getProcessName and HRegionServer#getProcessName.

{code:title=HMaster.java}
...
  // MASTER is name of the webapp and the attribute name used stuffing this
  // instance into web context.
  public static final String MASTER = "master";
...
  protected String getProcessName() {
    return MASTER;
  }
...
{code}

{code:title=HRegionServer.java}
...
  /** region server process name */
  public static final String REGIONSERVER = "regionserver";
...
  protected String getProcessName() {
    return REGIONSERVER;
  }
...
{code}
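A minimal sketch of what a corrected check might look like, matching on the lowercase process names instead of the class names. This is illustrative only, not the attached patch; the class name and example inputs are made up:

```java
public class CreateContextNameSketch {
    // Hypothetical fix: match the process names actually returned by
    // getProcessName() ("master" / "regionserver") rather than the
    // class names ("HMaster" / "HRegion") that never appear in the input.
    static String createContextName(String serverName) {
        if (serverName.contains("master")) {
            return "Master";
        } else if (serverName.contains("regionserver")) {
            return "RegionServer";
        }
        return "IPC"; // fallback, as before
    }
}
```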





[jira] [Commented] (HBASE-11915) Document and test 0.94 - 1.0.0 update

2014-10-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181103#comment-14181103
 ] 

Hudson commented on HBASE-11915:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #599 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/599/])
HBASE-11915 Document and test 0.94 - 1.0.0 update -- ADDENDUM (stack: rev 
046c4ce62da624f1a98a78e3fcd5204a7a420585)
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java


 Document and test 0.94 - 1.0.0 update
 --

 Key: HBASE-11915
 URL: https://issues.apache.org/jira/browse/HBASE-11915
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Assignee: stack
Priority: Critical
 Fix For: 0.99.2

 Attachments: 11915.addendum.txt, 11915.txt, upgrade.txt


 We explicitly did not remove some of the upgrade-related stuff in branch-1 
 for the possibility of supporting 0.94 - 1.0, similar to the 0.94 - 0.98 
 support. 
 We should document and test this support before 1.0 comes. 





[jira] [Updated] (HBASE-12327) MetricsHBaseServerSourceFactory#createContextName has wrong conditions

2014-10-23 Thread Sanghyun Yun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanghyun Yun updated HBASE-12327:
-
Attachment: HBASE-12327.patch

So, I changed some conditions.

 MetricsHBaseServerSourceFactory#createContextName has wrong conditions
 --

 Key: HBASE-12327
 URL: https://issues.apache.org/jira/browse/HBASE-12327
 Project: HBase
  Issue Type: Bug
Reporter: Sanghyun Yun
 Attachments: HBASE-12327.patch


 MetricsHBaseServerSourceFactory#createContextName has wrong conditions.
 It checks whether serverName contains "HMaster" or "HRegion".
 {code:title=MetricsHBaseServerSourceFactory.java}
 ...
   protected static String createContextName(String serverName) {
     if (serverName.contains("HMaster")) {
       return "Master";
     } else if (serverName.contains("HRegion")) {
       return "RegionServer";
     }
     return "IPC";
   }
 ...
 {code}
 But the serverName actually passed in contains "master" or "regionserver", per 
 HMaster#getProcessName and HRegionServer#getProcessName.
 {code:title=HMaster.java}
 ...
   // MASTER is name of the webapp and the attribute name used stuffing this
   // instance into web context.
   public static final String MASTER = "master";
 ...
   protected String getProcessName() {
     return MASTER;
   }
 ...
 {code}
 {code:title=HRegionServer.java}
 ...
   /** region server process name */
   public static final String REGIONSERVER = "regionserver";
 ...
   protected String getProcessName() {
     return REGIONSERVER;
   }
 ...
 {code}





[jira] [Updated] (HBASE-12327) MetricsHBaseServerSourceFactory#createContextName has wrong conditions

2014-10-23 Thread Sanghyun Yun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanghyun Yun updated HBASE-12327:
-
Status: Patch Available  (was: Open)

 MetricsHBaseServerSourceFactory#createContextName has wrong conditions
 --

 Key: HBASE-12327
 URL: https://issues.apache.org/jira/browse/HBASE-12327
 Project: HBase
  Issue Type: Bug
Reporter: Sanghyun Yun
 Attachments: HBASE-12327.patch


 MetricsHBaseServerSourceFactory#createContextName has wrong conditions.
 It checks whether serverName contains "HMaster" or "HRegion".
 {code:title=MetricsHBaseServerSourceFactory.java}
 ...
   protected static String createContextName(String serverName) {
     if (serverName.contains("HMaster")) {
       return "Master";
     } else if (serverName.contains("HRegion")) {
       return "RegionServer";
     }
     return "IPC";
   }
 ...
 {code}
 But the serverName actually passed in contains "master" or "regionserver", per 
 HMaster#getProcessName and HRegionServer#getProcessName.
 {code:title=HMaster.java}
 ...
   // MASTER is name of the webapp and the attribute name used stuffing this
   // instance into web context.
   public static final String MASTER = "master";
 ...
   protected String getProcessName() {
     return MASTER;
   }
 ...
 {code}
 {code:title=HRegionServer.java}
 ...
   /** region server process name */
   public static final String REGIONSERVER = "regionserver";
 ...
   protected String getProcessName() {
     return REGIONSERVER;
   }
 ...
 {code}





[jira] [Created] (HBASE-12328) Need to separate JvmMetrics for Master and RegionServer

2014-10-23 Thread Sanghyun Yun (JIRA)
Sanghyun Yun created HBASE-12328:


 Summary: Need to separate JvmMetrics for Master and RegionServer
 Key: HBASE-12328
 URL: https://issues.apache.org/jira/browse/HBASE-12328
 Project: HBase
  Issue Type: Improvement
Reporter: Sanghyun Yun
Priority: Minor


tag.ProcessName of JvmMetrics is "IPC".
It is the same for both the Master and the RegionServer.

{code:title=HBase(Master and RegionServer)'s Metrics Dump}
...
"name": "Hadoop:service=HBase,name=JvmMetrics",
"modelerType": "JvmMetrics",
"tag.Context": "jvm",
"tag.ProcessName": "IPC",
"tag.SessionId": "",
...
{code}

When I use HBase with Ganglia,
I set tagsForPrefix.jvm=ProcessName in hadoop-metrics2-hbase.properties.
{code:title=hadoop-metrics2-hbase.properties}
...
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
hbase.sink.ganglia.tagsForPrefix.jvm=ProcessName
...
{code}

But Ganglia generates only one RRD file, because tag.ProcessName is "IPC" for 
both the Master and the RegionServer.

I think we need to separate JvmMetrics for the Master and the RegionServer.
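The kind of change this implies can be sketched as deriving the JvmMetrics tag from each daemon's own process name instead of one shared constant. The class and method names below are illustrative, not the attached patch:

```java
public class JvmMetricsTagSketch {
    // Map each daemon's process name to a distinct tag.ProcessName, so
    // sinks like Ganglia see a separate metric stream (and RRD file)
    // per daemon type instead of one shared "IPC" stream.
    static String processNameTag(String processName) {
        if ("master".equals(processName)) {
            return "Master";
        }
        if ("regionserver".equals(processName)) {
            return "RegionServer";
        }
        return "IPC"; // previous shared value, kept as a fallback
    }
}
```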





[jira] [Updated] (HBASE-12328) Need to separate JvmMetrics for Master and RegionServer

2014-10-23 Thread Sanghyun Yun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanghyun Yun updated HBASE-12328:
-
Attachment: HBASE-12328.patch

I changed some code to create separate JvmMetrics:
Master's tag.ProcessName = "Master"
RegionServer's tag.ProcessName = "RegionServer"

 Need to separate JvmMetrics for Master and RegionServer
 ---

 Key: HBASE-12328
 URL: https://issues.apache.org/jira/browse/HBASE-12328
 Project: HBase
  Issue Type: Improvement
Reporter: Sanghyun Yun
Priority: Minor
 Attachments: HBASE-12328.patch


 tag.ProcessName of JvmMetrics is "IPC".
 It is the same for both the Master and the RegionServer.
 {code:title=HBase(Master and RegionServer)'s Metrics Dump}
 ...
 "name": "Hadoop:service=HBase,name=JvmMetrics",
 "modelerType": "JvmMetrics",
 "tag.Context": "jvm",
 "tag.ProcessName": "IPC",
 "tag.SessionId": "",
 ...
 {code}
 When I use HBase with Ganglia,
 I set tagsForPrefix.jvm=ProcessName in hadoop-metrics2-hbase.properties.
 {code:title=hadoop-metrics2-hbase.properties}
 ...
 *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
 hbase.sink.ganglia.tagsForPrefix.jvm=ProcessName
 ...
 {code}
 But Ganglia generates only one RRD file, because tag.ProcessName is "IPC" for 
 both the Master and the RegionServer.
 I think we need to separate JvmMetrics for the Master and the RegionServer.





[jira] [Updated] (HBASE-12328) Need to separate JvmMetrics for Master and RegionServer

2014-10-23 Thread Sanghyun Yun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanghyun Yun updated HBASE-12328:
-
Status: Patch Available  (was: Open)

 Need to separate JvmMetrics for Master and RegionServer
 ---

 Key: HBASE-12328
 URL: https://issues.apache.org/jira/browse/HBASE-12328
 Project: HBase
  Issue Type: Improvement
Reporter: Sanghyun Yun
Priority: Minor
 Attachments: HBASE-12328.patch


 tag.ProcessName of JvmMetrics is "IPC".
 It is the same for both the Master and the RegionServer.
 {code:title=HBase(Master and RegionServer)'s Metrics Dump}
 ...
 "name": "Hadoop:service=HBase,name=JvmMetrics",
 "modelerType": "JvmMetrics",
 "tag.Context": "jvm",
 "tag.ProcessName": "IPC",
 "tag.SessionId": "",
 ...
 {code}
 When I use HBase with Ganglia,
 I set tagsForPrefix.jvm=ProcessName in hadoop-metrics2-hbase.properties.
 {code:title=hadoop-metrics2-hbase.properties}
 ...
 *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
 hbase.sink.ganglia.tagsForPrefix.jvm=ProcessName
 ...
 {code}
 But Ganglia generates only one RRD file, because tag.ProcessName is "IPC" for 
 both the Master and the RegionServer.
 I think we need to separate JvmMetrics for the Master and the RegionServer.





[jira] [Updated] (HBASE-11683) Metrics for MOB

2014-10-23 Thread Li Jiajia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Jiajia updated HBASE-11683:
--
Attachment: HBASE-11683-V6.diff

Updated the patch (HBASE-11683-V6) based on Jon's comments.

 Metrics for MOB
 ---

 Key: HBASE-11683
 URL: https://issues.apache.org/jira/browse/HBASE-11683
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 2.0.0
Reporter: Jonathan Hsieh
Assignee: Jingcheng Du
 Attachments: HBASE-11683-V2.diff, HBASE-11683-V3.diff, 
 HBASE-11683-V4.diff, HBASE-11683-V5.diff, HBASE-11683-V6.diff, 
 HBASE-11683.diff


 We need to make sure to capture metrics about mobs.
 Some basic ones include:
 # of mob writes
 # of mob reads
 # avg size of mob (?)
 # mob files
 # of mob compactions / sweeps
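The basic metrics listed above could be held in a counter set roughly like the following. This is an illustrative sketch (class and field names are made up); a real implementation would live in a hadoop-metrics2 source on the region server:

```java
import java.util.concurrent.atomic.AtomicLong;

public class MobMetricsSketch {
    // Basic MOB counters from the list above.
    final AtomicLong mobWrites = new AtomicLong();       // # of mob writes
    final AtomicLong mobReads = new AtomicLong();        // # of mob reads
    final AtomicLong mobFiles = new AtomicLong();        // # of mob files
    final AtomicLong mobCompactions = new AtomicLong();  // # of compactions / sweeps
    final AtomicLong mobBytesWritten = new AtomicLong(); // for avg size

    // Avg mob size derived from total bytes / write count.
    long avgMobSizeBytes() {
        long writes = mobWrites.get();
        return writes == 0 ? 0 : mobBytesWritten.get() / writes;
    }
}
```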





[jira] [Commented] (HBASE-12328) Need to separate JvmMetrics for Master and RegionServer

2014-10-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181154#comment-14181154
 ] 

Hadoop QA commented on HBASE-12328:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12676542/HBASE-12328.patch
  against trunk revision .
  ATTACHMENT ID: 12676542

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the trunk's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.ipc.TestRpcMetrics

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11452//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11452//artifact/patchprocess/patchReleaseAuditWarnings.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11452//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11452//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11452//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11452//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11452//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11452//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11452//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11452//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11452//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11452//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11452//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11452//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11452//console

This message is automatically generated.

 Need to separate JvmMetrics for Master and RegionServer
 ---

 Key: HBASE-12328
 URL: https://issues.apache.org/jira/browse/HBASE-12328
 Project: HBase
  Issue Type: Improvement
Reporter: Sanghyun Yun
Priority: Minor
 Attachments: HBASE-12328.patch


 tag.ProcessName of JvmMetrics is "IPC".
 It is the same for both the Master and the RegionServer.
 {code:title=HBase(Master and RegionServer)'s Metrics Dump}
 ...
 "name": "Hadoop:service=HBase,name=JvmMetrics",
 "modelerType": "JvmMetrics",
 "tag.Context": "jvm",
 "tag.ProcessName": "IPC",
 "tag.SessionId": "",
 ...
 {code}
 When I use HBase with Ganglia,
 I set tagsForPrefix.jvm=ProcessName in hadoop-metrics2-hbase.properties.
 {code:title=hadoop-metrics2-hbase.properties}
 ...
 *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
 hbase.sink.ganglia.tagsForPrefix.jvm=ProcessName
 ...
 {code}
 But Ganglia generates only one RRD file, because tag.ProcessName is "IPC" for 
 both the Master and 

[jira] [Commented] (HBASE-11368) Multi-column family BulkLoad fails if compactions go on too long

2014-10-23 Thread Qiang Tian (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181168#comment-14181168
 ] 

Qiang Tian commented on HBASE-11368:


Initial YCSB test:

Env:
---
hadoop 2.2.0
YCSB 1.0.4 (Andrew's branch)
3 nodes: 1 master, 2 RSs (cluster details omitted, since this is just to 
evaluate the new lock)

Steps:
---
Followed Andrew's steps (see http://search-hadoop.com/m/DHED4hl7pC/)
The seed table has 3 CFs, pre-split into 20 regions.
Loaded 1 million rows into CF 'f1', using workloada.
Ran 3 iterations each for workloadc and workloada. The parameters in 
each run:
bq. -p columnfamily=f1 -p operationcount=100 -s -threads 10


Results:
---
0.98.5:
workload c:
[READ], AverageLatency(us), 496.225811
[READ], AverageLatency(us), 510.206831
[READ], AverageLatency(us), 501.256123

workload a:
[READ], AverageLatency(us), 676.4527555821747
[READ], AverageLatency(us), 622.5544771452717
[READ], AverageLatency(us), 628.1365657163067


0.98.5+patch:
workload c:
[READ], AverageLatency(us), 536.334437
[READ], AverageLatency(us), 508.40
[READ], AverageLatency(us), 491.416182


workload a:
[READ], AverageLatency(us), 640.3625218319231
[READ], AverageLatency(us), 642.9719823488798
[READ], AverageLatency(us), 631.7491770928287

It looks like there is little performance penalty.

I also ran PE on the cluster; since the test table has only 1 CF, the new lock 
is not actually used. Interestingly, with the patch the performance is even a 
bit better...

 Multi-column family BulkLoad fails if compactions go on too long
 

 Key: HBASE-11368
 URL: https://issues.apache.org/jira/browse/HBASE-11368
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: Qiang Tian
 Attachments: hbase-11368-0.98.5.patch


 Compactions take a read lock. For a multi-column family region, we want to 
 take a write lock on the region before bulk loading. If the compaction takes 
 too long, the bulk load fails.
 Various recipes include:
 + Making smaller regions (lame)
 + [~victorunique] suggests major compacting just before bulk loading over in 
 HBASE-10882 as a workaround.
 Does the compaction need a read lock for that long? Does the bulk load need 
 a full write lock when there are multiple column families? Can we fail more 
 gracefully at least?
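The lock interaction described above can be sketched with a plain ReentrantReadWriteLock. This illustrates the failure mode only; it is not HBase's actual region lock, and the class and method names are made up:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RegionLockSketch {
    final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // A compaction holds the (shared) read lock for its whole run.
    void compact(Runnable work) {
        lock.readLock().lock();
        try {
            work.run();
        } finally {
            lock.readLock().unlock();
        }
    }

    // A multi-CF bulk load wants the exclusive write lock; if a long
    // compaction holds the read lock past the timeout, the load fails.
    boolean tryBulkLoad(long timeoutMs) throws InterruptedException {
        if (!lock.writeLock().tryLock(timeoutMs, TimeUnit.MILLISECONDS)) {
            return false; // timed out waiting behind a compaction
        }
        try {
            return true; // would load the HFiles here
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```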





[jira] [Commented] (HBASE-12327) MetricsHBaseServerSourceFactory#createContextName has wrong conditions

2014-10-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181178#comment-14181178
 ] 

Hadoop QA commented on HBASE-12327:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12676540/HBASE-12327.patch
  against trunk revision .
  ATTACHMENT ID: 12676540

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the trunk's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11451//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11451//artifact/patchprocess/patchReleaseAuditWarnings.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11451//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11451//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11451//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11451//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11451//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11451//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11451//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11451//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11451//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11451//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11451//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11451//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11451//console

This message is automatically generated.

 MetricsHBaseServerSourceFactory#createContextName has wrong conditions
 --

 Key: HBASE-12327
 URL: https://issues.apache.org/jira/browse/HBASE-12327
 Project: HBase
  Issue Type: Bug
Reporter: Sanghyun Yun
 Attachments: HBASE-12327.patch


 MetricsHBaseServerSourceFactory#createContextName has wrong conditions.
 It checks whether serverName contains "HMaster" or "HRegion".
 {code:title=MetricsHBaseServerSourceFactory.java}
 ...
   protected static String createContextName(String serverName) {
     if (serverName.contains("HMaster")) {
       return "Master";
     } else if (serverName.contains("HRegion")) {
       return "RegionServer";
     }
     return "IPC";
   }
 ...
 {code}
 But the serverName actually passed in contains "master" or "regionserver", per 
 HMaster#getProcessName and HRegionServer#getProcessName.
 {code:title=HMaster.java}
 ...
   // MASTER is name of the webapp and the attribute name used stuffing this
   // instance into web context.
   public static final String MASTER = "master";
 ...
   

[jira] [Commented] (HBASE-12324) Improve compaction speed and process for immutable short lived datasets

2014-10-23 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181317#comment-14181317
 ] 

Sean Busbey commented on HBASE-12324:
-

In the case where we have a table-wide TTL, is there any reason not to just do a 
delete-only optimization in the general compaction policy?

We could add the newest timestamp of all the cells in the HFile to the fixed 
trailer.
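The trailer-timestamp idea reduces to a simple predicate: if the newest cell recorded for a file is already past the TTL, the whole file can be dropped without rewriting it. A sketch, assuming a hypothetical trailer field holding that timestamp:

```java
public class TtlDropCheck {
    // If the newest cell in the file is already older than the TTL,
    // every cell in it is expired, so the whole file can be
    // deleted/archived without reading or rewriting any data.
    static boolean canDropWholeFile(long newestCellTimestampMs, long ttlMs, long nowMs) {
        return nowMs - newestCellTimestampMs > ttlMs;
    }
}
```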

 Improve compaction speed and process for immutable short lived datasets
 ---

 Key: HBASE-12324
 URL: https://issues.apache.org/jira/browse/HBASE-12324
 Project: HBase
  Issue Type: New Feature
  Components: Compaction
Affects Versions: 0.98.0, 0.96.0
Reporter: Sheetal Dolas

 We have seen multiple cases where HBase is used to store immutable data and 
 the data lives for a short period of time (a few days).
 On very high volume systems, major compactions become very costly and 
 slow down ingestion rates.
 In all such use cases (immutable data, a high write rate, moderate read 
 rates, and a short TTL), avoiding compactions entirely and just deleting old 
 data brings a lot of performance benefits.
 We should have a compaction policy that only deletes/archives files older 
 than the TTL and does not compact any files.
 Also attaching a patch that does so.





[jira] [Commented] (HBASE-12327) MetricsHBaseServerSourceFactory#createContextName has wrong conditions

2014-10-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181373#comment-14181373
 ] 

Ted Yu commented on HBASE-12327:


Why are both "HMaster" and "master" checked?

Thanks

 MetricsHBaseServerSourceFactory#createContextName has wrong conditions
 --

 Key: HBASE-12327
 URL: https://issues.apache.org/jira/browse/HBASE-12327
 Project: HBase
  Issue Type: Bug
Reporter: Sanghyun Yun
 Attachments: HBASE-12327.patch


 MetricsHBaseServerSourceFactory#createContextName has wrong conditions.
 It checks whether serverName contains "HMaster" or "HRegion".
 {code:title=MetricsHBaseServerSourceFactory.java}
 ...
   protected static String createContextName(String serverName) {
     if (serverName.contains("HMaster")) {
       return "Master";
     } else if (serverName.contains("HRegion")) {
       return "RegionServer";
     }
     return "IPC";
   }
 ...
 {code}
 But the serverName actually passed in contains "master" or "regionserver", per 
 HMaster#getProcessName and HRegionServer#getProcessName.
 {code:title=HMaster.java}
 ...
   // MASTER is name of the webapp and the attribute name used stuffing this
   // instance into web context.
   public static final String MASTER = "master";
 ...
   protected String getProcessName() {
     return MASTER;
   }
 ...
 {code}
 {code:title=HRegionServer.java}
 ...
   /** region server process name */
   public static final String REGIONSERVER = "regionserver";
 ...
   protected String getProcessName() {
     return REGIONSERVER;
   }
 ...
 {code}





[jira] [Commented] (HBASE-12287) Add retry runners to the most commonly failing tests

2014-10-23 Thread Alex Newman (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181426#comment-14181426
 ] 

Alex Newman commented on HBASE-12287:
-

3 out of 5 makes sense. I think it might also make sense if we were still 
alerted to it. And if we could track it some way. Ideas?


Sent from Alex Newman on a small glass box.
404.507.6749

On Wed, Oct 22, 2014 at 5:22 PM, Andrew Purtell (JIRA) j...@apache.org



 Add retry runners to the most commonly failing tests
 

 Key: HBASE-12287
 URL: https://issues.apache.org/jira/browse/HBASE-12287
 Project: HBase
  Issue Type: Sub-task
Reporter: Alex Newman
Assignee: Alex Newman
 Attachments: HBASE-12287.patch


 Many of our tests have nondeterministic behavior due to inter-test 
 interference. Usually restarting the test is enough to verify whether it is 
 test interference or a broken test. Let's use a retry runner which runs the 
 after-test/before-test methods and reruns the tests 10 times.
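The retry idea above can be sketched as a small helper (illustrative, not the actual JUnit runner): re-run the test body up to N times, invoking the teardown/setup hooks between attempts, and treat it as passing if any attempt succeeds.

```java
import java.util.concurrent.Callable;

// Sketch of a retry runner: reset shared state between attempts and
// pass if any of maxRuns attempts succeeds. All names are illustrative.
final class RetryRunner {
  static boolean passes(int maxRuns, Callable<Boolean> test,
      Runnable teardown, Runnable setup) throws Exception {
    for (int i = 0; i < maxRuns; i++) {
      if (i > 0) {
        // Between attempts, run the after-test/before-test hooks to
        // clear any inter-test interference.
        teardown.run();
        setup.run();
      }
      if (test.call()) {
        return true;
      }
    }
    return false;
  }
}
```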



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12329) Table create with duplicate column family names quietly succeeds

2014-10-23 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-12329:
---

 Summary: Table create with duplicate column family names quietly 
succeeds
 Key: HBASE-12329
 URL: https://issues.apache.org/jira/browse/HBASE-12329
 Project: HBase
  Issue Type: Bug
  Components: Client, shell
Reporter: Sean Busbey
Priority: Minor


From the mailing list

{quote}
I was expecting that it is forbidden, **but** this call does not throw any
exception
{code}
String[] families = {"cf", "cf"};
HTableDescriptor desc = new HTableDescriptor(name);
for (String cf : families) {
  HColumnDescriptor coldef = new HColumnDescriptor(cf);
  desc.addFamily(coldef);
}
try {
  admin.createTable(desc);
} catch (TableExistsException e) {
  throw new IOException("table '" + name + "' already exists");
}
{code}
{quote}

And Ted's follow up replicates in the shell
{quote}
hbase(main):001:0> create 't2', {NAME => 'f1'}, {NAME => 'f1'}

The table got created - with 1 column family:

hbase(main):002:0> describe 't2'
DESCRIPTION
   ENABLED
 't2', {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW',
REPLICATION_SCOPE => '0 true
 ', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL =>
'2147483647', KEEP_DELETED
 _CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE
=> 'true'}
1 row(s) in 0.1000 seconds
{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12329) Table create with duplicate column family names quietly succeeds

2014-10-23 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-12329:

Description: 
From the mailing list

{quote}
I was expecting that it is forbidden, **but** this call does not throw any
exception
{code}
String[] families = {"cf", "cf"};
HTableDescriptor desc = new HTableDescriptor(name);
for (String cf : families) {
  HColumnDescriptor coldef = new HColumnDescriptor(cf);
  desc.addFamily(coldef);
}
try {
  admin.createTable(desc);
} catch (TableExistsException e) {
  throw new IOException("table '" + name + "' already exists");
}
{code}
{quote}

And Ted's follow up replicates in the shell
{code}
hbase(main):001:0> create 't2', {NAME => 'f1'}, {NAME => 'f1'}

The table got created - with 1 column family:

hbase(main):002:0> describe 't2'
DESCRIPTION
   ENABLED
 't2', {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW',
REPLICATION_SCOPE => '0 true
 ', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL =>
'2147483647', KEEP_DELETED
 _CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE
=> 'true'}
1 row(s) in 0.1000 seconds
{code}

  was:
From the mailing list

{quote}
I was expecting that it is forbidden, **but** this call does not throw any
exception
{code}
String[] families = {"cf", "cf"};
HTableDescriptor desc = new HTableDescriptor(name);
for (String cf : families) {
  HColumnDescriptor coldef = new HColumnDescriptor(cf);
  desc.addFamily(coldef);
}
try {
  admin.createTable(desc);
} catch (TableExistsException e) {
  throw new IOException("table '" + name + "' already exists");
}
{code}
{quote}

And Ted's follow up replicates in the shell
{quote}
hbase(main):001:0> create 't2', {NAME => 'f1'}, {NAME => 'f1'}

The table got created - with 1 column family:

hbase(main):002:0> describe 't2'
DESCRIPTION
   ENABLED
 't2', {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW',
REPLICATION_SCOPE => '0 true
 ', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL =>
'2147483647', KEEP_DELETED
 _CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE
=> 'true'}
1 row(s) in 0.1000 seconds
{quote}


 Table create with duplicate column family names quietly succeeds
 

 Key: HBASE-12329
 URL: https://issues.apache.org/jira/browse/HBASE-12329
 Project: HBase
  Issue Type: Bug
  Components: Client, shell
Reporter: Sean Busbey
Priority: Minor

 From the mailing list
 {quote}
 I was expecting that it is forbidden, **but** this call does not throw any
 exception
 {code}
 String[] families = {"cf", "cf"};
 HTableDescriptor desc = new HTableDescriptor(name);
 for (String cf : families) {
   HColumnDescriptor coldef = new HColumnDescriptor(cf);
   desc.addFamily(coldef);
 }
 try {
   admin.createTable(desc);
 } catch (TableExistsException e) {
   throw new IOException("table '" + name + "' already exists");
 }
 {code}
 {quote}
 And Ted's follow up replicates in the shell
 {code}
 hbase(main):001:0> create 't2', {NAME => 'f1'}, {NAME => 'f1'}
 The table got created - with 1 column family:
 hbase(main):002:0> describe 't2'
 DESCRIPTION
    ENABLED
  't2', {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW',
 REPLICATION_SCOPE => '0 true
  ', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL =>
 '2147483647', KEEP_DELETED
  _CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE
 => 'true'}
 1 row(s) in 0.1000 seconds
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12329) Table create with duplicate column family names quietly succeeds

2014-10-23 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181434#comment-14181434
 ] 

Sean Busbey commented on HBASE-12329:
-

I'd be curious what happens if you give different per-CF options.

It might be safe to just issue a WARN in the shell case and have the call to 
``desc.addFamily`` throw. Opinions?
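A sketch of the proposed behavior, using a hypothetical stand-in class rather than the real HTableDescriptor: addFamily throws on a duplicate name instead of silently replacing the earlier entry.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical stand-in for HTableDescriptor, sketching the proposal
// above: reject a duplicate column family name instead of quietly
// keeping only one copy.
final class TableDescriptorSketch {
  private final Set<String> families = new LinkedHashSet<>();

  void addFamily(String name) {
    if (!families.add(name)) {
      throw new IllegalArgumentException(
          "Column family '" + name + "' already added");
    }
  }

  int getFamilyCount() {
    return families.size();
  }
}
```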

 Table create with duplicate column family names quietly succeeds
 

 Key: HBASE-12329
 URL: https://issues.apache.org/jira/browse/HBASE-12329
 Project: HBase
  Issue Type: Bug
  Components: Client, shell
Reporter: Sean Busbey
Priority: Minor

 From the mailing list
 {quote}
 I was expecting that it is forbidden, **but** this call does not throw any
 exception
 {code}
 String[] families = {"cf", "cf"};
 HTableDescriptor desc = new HTableDescriptor(name);
 for (String cf : families) {
   HColumnDescriptor coldef = new HColumnDescriptor(cf);
   desc.addFamily(coldef);
 }
 try {
   admin.createTable(desc);
 } catch (TableExistsException e) {
   throw new IOException("table '" + name + "' already exists");
 }
 {code}
 {quote}
 And Ted's follow up replicates in the shell
 {code}
 hbase(main):001:0> create 't2', {NAME => 'f1'}, {NAME => 'f1'}
 The table got created - with 1 column family:
 hbase(main):002:0> describe 't2'
 DESCRIPTION
    ENABLED
  't2', {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW',
 REPLICATION_SCOPE => '0 true
  ', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL =>
 '2147483647', KEEP_DELETED
  _CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE
 => 'true'}
 1 row(s) in 0.1000 seconds
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12325) Add Utility to remove snapshot from a directory

2014-10-23 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181489#comment-14181489
 ] 

Elliott Clark commented on HBASE-12325:
---

bq.The initial design was exporting from one hbase cluster to another,
Export to HDFS has been there for a long time now. What the initial design was 
doesn't matter much. It's all about what's useful now.

bq.anyway, you are using export in a wrong way 
Disagree.  This is an easy way to get differential backups that are easily 
importable and clean up is pretty easy. If HDFS ever gets hard links then this 
can go away. But that doesn't look like it will ever happen.

So users can get differential backups right now, by:
# Export all of your snapshots into the same directory on a remote HDFS.
# Keep N snapshots, removing the oldest after each export.
# That's it. There are no other steps required.

bq. or, you can use the tool. but the tool requires coordination.

I'm working on a tool that doesn't require any coordination. It just removes 
the files from the oldest snapshot that are not referenced any more. So as long 
as the oldest snapshot is not still in transition (pretty easy if you're 
keeping more than 2 snapshots), you can run clean up and snapshot in 
parallel.
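The cleanup rule described above can be sketched with plain collections (a model, not the attached DeleteRemoteSnapshotTool): a file belonging to the oldest snapshot may be deleted only if no newer snapshot in the same directory still references it.

```java
import java.util.*;

// Sketch of coordination-free cleanup: snapshots are modeled as
// name -> set of referenced file names. Only files referenced solely
// by the oldest snapshot are safe to delete.
final class SnapshotCleanup {
  static Set<String> filesSafeToDelete(
      String oldest, Map<String, Set<String>> snapshots) {
    Set<String> stillReferenced = new HashSet<>();
    for (Map.Entry<String, Set<String>> e : snapshots.entrySet()) {
      if (!e.getKey().equals(oldest)) {
        stillReferenced.addAll(e.getValue());
      }
    }
    Set<String> deletable = new HashSet<>(snapshots.get(oldest));
    deletable.removeAll(stillReferenced);
    return deletable;
  }
}
```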

bq.in my opinion this tool doesn't belong to hbase-core
Users of HBase can benefit from a way of doing differential backups. Right now 
this is the best way to do that.

 Add Utility to remove snapshot from a directory
 ---

 Key: HBASE-12325
 URL: https://issues.apache.org/jira/browse/HBASE-12325
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: DeleteRemoteSnapshotTool.java


 If there are several snapshots exported to a single directory, it's nice to 
 be able to remove the oldest one. Since snapshots in the same directory can 
 share files it's not as simple as just removing all files in a snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12328) Need to separate JvmMetrics for Master and RegionServer

2014-10-23 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181492#comment-14181492
 ] 

Elliott Clark commented on HBASE-12328:
---

Failure looks related.

 Need to separate JvmMetrics for Master and RegionServer
 ---

 Key: HBASE-12328
 URL: https://issues.apache.org/jira/browse/HBASE-12328
 Project: HBase
  Issue Type: Improvement
Reporter: Sanghyun Yun
Priority: Minor
 Attachments: HBASE-12328.patch


 tag.ProcessName of JvmMetrics is IPC.
 It is the same for both Master and RegionServer.
 {code:title=HBase(Master and RegionServer)'s Metrics Dump}
 ...
 "name": "Hadoop:service=HBase,name=JvmMetrics",
 "modelerType": "JvmMetrics",
 "tag.Context": "jvm",
 "tag.ProcessName": "IPC",
 "tag.SessionId": "",
 ...
 {code}
 When I use HBase with Ganglia,
 I wrote tagsForPrefix.jvm=ProcessName in hadoop-metrics2-hbase.properties.
 {code:title=hadoop-metrics2-hbase.properties}
 ...
 *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
 hbase.sink.ganglia.tagsForPrefix.jvm=ProcessName
 ...
 {code}
 But Ganglia generates only one RRD file because tag.ProcessName is IPC for both 
 Master and RegionServer.
 I think we need to separate JvmMetrics for Master and RegionServer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12325) Add Utility to remove snapshot from a directory

2014-10-23 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181504#comment-14181504
 ] 

Matteo Bertozzi commented on HBASE-12325:
-

{quote}
I'm working on a tool that doesn't require any coordination. It just removes 
the files from the oldest snapshot that are not referenced any more. So as long 
as the oldest snapshot is not still in transition (pretty easy if you're 
keeping more than 2 snapshots). Then you can run clean up and snapshot in 
parallel.
{quote}
On the last sentence you are implying coordination (export and clean, not 
snapshot... snapshots are HBase-local and don't impact what you do on the remote 
copy).
If you look at what I have attached, it is basically what you are describing.


 Add Utility to remove snapshot from a directory
 ---

 Key: HBASE-12325
 URL: https://issues.apache.org/jira/browse/HBASE-12325
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: DeleteRemoteSnapshotTool.java


 If there are several snapshots exported to a single directory, it's nice to 
 be able to remove the oldest one. Since snapshots in the same directory can 
 share files it's not as simple as just removing all files in a snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12325) Add Utility to remove snapshot from a directory

2014-10-23 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181515#comment-14181515
 ] 

Elliott Clark commented on HBASE-12325:
---

bq.on the last sentence you are implying coordination (export and clean, not 
snapshot... snapshot are hbase local and don't impact what you do on the remote 
copy)

Export, snapshot, and clean up can all run in parallel as long as you are 
keeping more than one snapshot.

 Add Utility to remove snapshot from a directory
 ---

 Key: HBASE-12325
 URL: https://issues.apache.org/jira/browse/HBASE-12325
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: DeleteRemoteSnapshotTool.java


 If there are several snapshots exported to a single directory, it's nice to 
 be able to remove the oldest one. Since snapshots in the same directory can 
 share files it's not as simple as just removing all files in a snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12328) Need to separate JvmMetrics for Master and RegionServer

2014-10-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181517#comment-14181517
 ] 

stack commented on HBASE-12328:
---

Does "name": "Hadoop:service=HBase,name=IPC,sub=IPC" have the same issue? 
Thanks.

 Need to separate JvmMetrics for Master and RegionServer
 ---

 Key: HBASE-12328
 URL: https://issues.apache.org/jira/browse/HBASE-12328
 Project: HBase
  Issue Type: Improvement
Reporter: Sanghyun Yun
Priority: Minor
 Attachments: HBASE-12328.patch


 tag.ProcessName of JvmMetrics is IPC.
 It is the same for both Master and RegionServer.
 {code:title=HBase(Master and RegionServer)'s Metrics Dump}
 ...
 "name": "Hadoop:service=HBase,name=JvmMetrics",
 "modelerType": "JvmMetrics",
 "tag.Context": "jvm",
 "tag.ProcessName": "IPC",
 "tag.SessionId": "",
 ...
 {code}
 When I use HBase with Ganglia,
 I wrote tagsForPrefix.jvm=ProcessName in hadoop-metrics2-hbase.properties.
 {code:title=hadoop-metrics2-hbase.properties}
 ...
 *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
 hbase.sink.ganglia.tagsForPrefix.jvm=ProcessName
 ...
 {code}
 But Ganglia generates only one RRD file because tag.ProcessName is IPC for both 
 Master and RegionServer.
 I think we need to separate JvmMetrics for Master and RegionServer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-6635) Refactor HFile version selection and exception handling.

2014-10-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6635:
-
Attachment: hfile.png

Let's also squash all this hfilereaderv2 and v3 and abstract hfilereader and 
scannerv3 + abstractscanner + scannerv2, etc.  See attached diagram for a view on 
some of the convolutions we've made.

 Refactor HFile version selection and exception handling.
 

 Key: HBASE-6635
 URL: https://issues.apache.org/jira/browse/HBASE-6635
 Project: HBase
  Issue Type: Bug
Reporter: Jonathan Hsieh
 Attachments: hfile.png


 Trunk and 0.94's HFile code has some fairly convoluted code for bypassing 
 checksums and has mixed usage of runtime and io exceptions when error 
 conditions arise.  This jira would clean up the code to have better 
 encapsulation and be more explicit about what kinds of exceptions are thrown 
 and what they mean.  (This was partially spurred by comments in reviews of 
 HBASE-6586).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12285) Builds are failing, possibly because of SUREFIRE-1091

2014-10-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181622#comment-14181622
 ] 

stack commented on HBASE-12285:
---

[~dimaspivak] Am game for trying any experiment you want.  Make a patch for the 
no reuse and I'll shove it in along w/ WARN to see if it helps.

 Builds are failing, possibly because of SUREFIRE-1091
 -

 Key: HBASE-12285
 URL: https://issues.apache.org/jira/browse/HBASE-12285
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Dima Spivak
Assignee: Dima Spivak
Priority: Blocker
 Attachments: HBASE-12285_branch-1_v1.patch


 Our branch-1 builds on builds.apache.org have been failing in recent days 
 after we switched over to an official version of Surefire a few days back 
 (HBASE-4955). The version we're using, 2.17, is hit by a bug 
 ([SUREFIRE-1091|https://jira.codehaus.org/browse/SUREFIRE-1091]) that results 
 in an IOException, which looks like what we're seeing on Jenkins.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12312) Another couple of createTable race conditions

2014-10-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12312:
--
Attachment: HBASE-12312_master_v3 (1).patch

 Another couple of createTable race conditions
 -

 Key: HBASE-12312
 URL: https://issues.apache.org/jira/browse/HBASE-12312
 Project: HBase
  Issue Type: Bug
Reporter: Dima Spivak
Assignee: Dima Spivak
 Attachments: HBASE-12312_master_v1.patch, 
 HBASE-12312_master_v2.patch, HBASE-12312_master_v3 (1).patch, 
 HBASE-12312_master_v3.patch, HBASE-12312_master_v3.patch, 
 HBASE-12312_master_v3.patch


 Found a couple more failing tests in TestAccessController and 
 TestScanEarlyTermination caused by my favorite race condition. :) Will post a 
 patch in a second.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12285) Builds are failing, possibly because of SUREFIRE-1091

2014-10-23 Thread Dima Spivak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181626#comment-14181626
 ] 

Dima Spivak commented on HBASE-12285:
-

The reuseForks=false could just be set in the mvn command line in the branch-1 
Jenkins job. 

 Builds are failing, possibly because of SUREFIRE-1091
 -

 Key: HBASE-12285
 URL: https://issues.apache.org/jira/browse/HBASE-12285
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Dima Spivak
Assignee: Dima Spivak
Priority: Blocker
 Attachments: HBASE-12285_branch-1_v1.patch


 Our branch-1 builds on builds.apache.org have been failing in recent days 
 after we switched over to an official version of Surefire a few days back 
 (HBASE-4955). The version we're using, 2.17, is hit by a bug 
 ([SUREFIRE-1091|https://jira.codehaus.org/browse/SUREFIRE-1091]) that results 
 in an IOException, which looks like what we're seeing on Jenkins.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11125) Introduce a higher level interface for registering interest in coprocessor upcalls

2014-10-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181625#comment-14181625
 ] 

stack commented on HBASE-11125:
---

Should we remove this issue from 1.0 since it has no assignee and is seeing no 
progress?

 Introduce a higher level interface for registering interest in coprocessor 
 upcalls
 --

 Key: HBASE-11125
 URL: https://issues.apache.org/jira/browse/HBASE-11125
 Project: HBase
  Issue Type: New Feature
Reporter: Andrew Purtell
Priority: Critical
 Fix For: 0.99.2


 We should introduce a higher level interface for managing the registration of 
 'user' code for execution from the low level hooks. It should not be 
 necessary for coprocessor implementers to learn the universe of available low 
 level hooks and the subtleties of their placement within HBase core code. 
 Instead the higher level API should allow the implementer to describe their 
 intent and then this API should choose the appropriate low level hook 
 placement.
 A very desirable side effect is a layer of indirection between coprocessor 
 implementers and the actual hooks. This will address the perennial complaint 
 that the low level hooks change too much from release to release, as recently 
 discussed during the RM panel at HBaseCon. If we try to avoid changing the 
 particular placement and arguments of hook functions in response to those 
 complaints, this can be an onerous constraint on necessary internals 
 evolution. Instead we can direct coprocessor implementers to consider the new 
 API and provide the same interface stability guarantees there as we do for 
 client API.
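As a rough illustration of the intent-based registration being asked for (all names here are hypothetical, not a proposed HBase API): the implementer declares what they care about, and the framework owns the mapping from intent to concrete low-level hooks, so hook placement can evolve without breaking coprocessors.

```java
import java.util.*;

// Hypothetical sketch of an intent-based registration API. User code
// registers against an Intent; the framework decides which internal
// hook realizes each intent and fires the callbacks there.
final class IntentRegistry {
  enum Intent { BEFORE_WRITE, AFTER_WRITE, BEFORE_READ, AFTER_READ }

  private final Map<Intent, List<Runnable>> callbacks =
      new EnumMap<>(Intent.class);

  void register(Intent intent, Runnable callback) {
    callbacks.computeIfAbsent(intent, k -> new ArrayList<>()).add(callback);
  }

  // Called by the framework at whichever internal hook realizes the intent.
  void fire(Intent intent) {
    for (Runnable r : callbacks.getOrDefault(intent, List.of())) {
      r.run();
    }
  }
}
```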



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-9864) Notifications bus for use by cluster members keeping up-to-date on changes

2014-10-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-9864:
-
Fix Version/s: (was: 0.99.2)

 Notifications bus for use by cluster members keeping up-to-date on changes
 --

 Key: HBASE-9864
 URL: https://issues.apache.org/jira/browse/HBASE-9864
 Project: HBase
  Issue Type: Brainstorming
Reporter: stack
Priority: Blocker

 In namespaces and acls, zk callbacks are used so all participating servers 
 are notified when there is a change in acls/namespaces list.
 The new visibility tags feature coming in copies the same model of using zk 
 with listeners for the features' particular notifications.
 Three systems each w/ their own implementation of the notifications all using 
 zk w/ their own feature-specific watchers.
 Should probably unify.
 Do we have to go via zk?  Seems like all want to be notified when an hbase 
 table is updated.  Could we tell servers directly rather than go via zk?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-9864) Notifications bus for use by cluster members keeping up-to-date on changes

2014-10-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181634#comment-14181634
 ] 

stack commented on HBASE-9864:
--

Removed from branch-1 because no assignee and not being worked on.

 Notifications bus for use by cluster members keeping up-to-date on changes
 --

 Key: HBASE-9864
 URL: https://issues.apache.org/jira/browse/HBASE-9864
 Project: HBase
  Issue Type: Brainstorming
Reporter: stack
Priority: Blocker

 In namespaces and acls, zk callbacks are used so all participating servers 
 are notified when there is a change in acls/namespaces list.
 The new visibility tags feature coming in copies the same model of using zk 
 with listeners for the features' particular notifications.
 Three systems each w/ their own implementation of the notifications all using 
 zk w/ their own feature-specific watchers.
 Should probably unify.
 Do we have to go via zk?  Seems like all want to be notified when an hbase 
 table is updated.  Could we tell servers directly rather than go via zk?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-9864) Notifications bus for use by cluster members keeping up-to-date on changes

2014-10-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181636#comment-14181636
 ] 

stack commented on HBASE-9864:
--

bq. not being worked on.

... in time for 1.0 release (to my knowledge -- correct me if I am wrong)

 Notifications bus for use by cluster members keeping up-to-date on changes
 --

 Key: HBASE-9864
 URL: https://issues.apache.org/jira/browse/HBASE-9864
 Project: HBase
  Issue Type: Brainstorming
Reporter: stack
Priority: Blocker

 In namespaces and acls, zk callbacks are used so all participating servers 
 are notified when there is a change in acls/namespaces list.
 The new visibility tags feature coming in copies the same model of using zk 
 with listeners for the features' particular notifications.
 Three systems each w/ their own implementation of the notifications all using 
 zk w/ their own feature-specific watchers.
 Should probably unify.
 Do we have to go via zk?  Seems like all want to be notified when an hbase 
 table is updated.  Could we tell servers directly rather than go via zk?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12330) compaction bandwidth limit

2014-10-23 Thread Ming Chen (JIRA)
Ming Chen created HBASE-12330:
-

 Summary: compaction bandwidth limit
 Key: HBASE-12330
 URL: https://issues.apache.org/jira/browse/HBASE-12330
 Project: HBase
  Issue Type: New Feature
  Components: Admin, Compaction
Affects Versions: 0.89-fb
 Environment: software platform
Reporter: Ming Chen
Priority: Minor
 Fix For: 0.89-fb


Compaction currently runs at full speed. This change provides two knobs to limit the 
compaction bandwidth: 
1. compactBwLimit: the desired max bandwidth per compaction. If the compaction 
thread runs too fast, it will be put to sleep for a while.
2. numOfFilesDisableCompactLimit: Limiting the compaction speed will increase 
the number of files per store, which increases read latency. This parameter can 
disable the compaction limit when the number of files in a store exceeds the 
value.
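The compactBwLimit throttle described above can be sketched as follows (names and structure are illustrative, not the attached patch): after each chunk of compaction output, compute how long the thread must sleep to keep the average rate at or under the limit.

```java
// Sketch of a compaction bandwidth throttle: track bytes written and
// sleep the compaction thread whenever it is ahead of the allowed rate.
final class CompactionThrottle {
  private final long bytesPerSecond;  // the compactBwLimit knob
  private final long startMillis;
  private long bytesWritten;

  CompactionThrottle(long bytesPerSecond, long startMillis) {
    this.bytesPerSecond = bytesPerSecond;
    this.startMillis = startMillis;
  }

  /** Returns how long the compaction thread should sleep, in ms. */
  long throttleMillis(long newBytes, long nowMillis) {
    bytesWritten += newBytes;
    // Minimum wall-clock time the bytes written so far should have taken.
    long minElapsed = bytesWritten * 1000 / bytesPerSecond;
    long elapsed = nowMillis - startMillis;
    return Math.max(0, minElapsed - elapsed);
  }
}
```

The numOfFilesDisableCompactLimit knob would simply bypass this throttle once a store's file count exceeds the configured value.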



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10856) Prep for 1.0

2014-10-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181644#comment-14181644
 ] 

stack commented on HBASE-10856:
---

I removed links from HBASE-11122 Annotate coprocessor APIs, HBASE-11124 
Pluggable major compaction strategy, HBASE-9864 Notifications bus for use by 
cluster members keeping up-to-date on changes because not being worked on (to 
best of my knowledge -- at least not in time for a 1.0 -- correct me if wrong).

I also removed HBASE-11125 Introduce a higher level interface for registering 
interest in coprocessor upcalls, though it still has a fix version of 0.99.2, so 
it is still related to 1.0.

 Prep for 1.0
 

 Key: HBASE-10856
 URL: https://issues.apache.org/jira/browse/HBASE-10856
 Project: HBase
  Issue Type: Umbrella
Reporter: stack
 Fix For: 0.99.2


 Tasks for 1.0 copied here from our '1.0.0' mailing list discussion.  Idea is 
 to file subtasks off this one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10856) Prep for 1.0

2014-10-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181652#comment-14181652
 ] 

stack commented on HBASE-10856:
---

I also removed HBASE-5827 [Coprocessors] Observer notifications on exceptions 
as a link to this issue.

 Prep for 1.0
 

 Key: HBASE-10856
 URL: https://issues.apache.org/jira/browse/HBASE-10856
 Project: HBase
  Issue Type: Umbrella
Reporter: stack
 Fix For: 0.99.2


 Tasks for 1.0 copied here from our '1.0.0' mailing list discussion.  Idea is 
 to file subtasks off this one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11964) Improve spreading replication load from failed regionservers

2014-10-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181657#comment-14181657
 ] 

Andrew Purtell commented on HBASE-11964:


Cool, going to commit unless objection later today.

 Improve spreading replication load from failed regionservers
 

 Key: HBASE-11964
 URL: https://issues.apache.org/jira/browse/HBASE-11964
 Project: HBase
  Issue Type: Sub-task
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 2.0.0, 0.98.8, 0.94.25, 0.99.2

 Attachments: HBASE-11964.patch, HBASE-11964.patch, HBASE-11964.patch


 Improve replication source thread handling. Improve fanout when transferring 
 queues. Ensure replication sources terminate properly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11125) Introduce a higher level interface for registering interest in coprocessor upcalls

2014-10-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181659#comment-14181659
 ] 

Andrew Purtell commented on HBASE-11125:


bq. Should we remove this issue from 1.0 since it has no assignee and is seeing 
no progress?

Yes. Maybe for 1.1 or 2.0 

 Introduce a higher level interface for registering interest in coprocessor 
 upcalls
 --

 Key: HBASE-11125
 URL: https://issues.apache.org/jira/browse/HBASE-11125
 Project: HBase
  Issue Type: New Feature
Reporter: Andrew Purtell
Priority: Critical
 Fix For: 0.99.2


 We should introduce a higher level interface for managing the registration of 
 'user' code for execution from the low level hooks. It should not be 
 necessary for coprocessor implementers to learn the universe of available low 
 level hooks and the subtleties of their placement within HBase core code. 
 Instead the higher level API should allow the implementer to describe their 
 intent and then this API should choose the appropriate low level hook 
 placement.
 A very desirable side effect is a layer of indirection between coprocessor 
 implementers and the actual hooks. This will address the perennial complaint 
 that the low level hooks change too much from release to release, as recently 
 discussed during the RM panel at HBaseCon. If we try to avoid changing the 
 particular placement and arguments of hook functions in response to those 
 complaints, this can be an onerous constraint on necessary internals 
 evolution. Instead we can direct coprocessor implementers to consider the new 
 API and provide the same interface stability guarantees there as we do for 
 client API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11125) Introduce a higher level interface for registering interest in coprocessor upcalls

2014-10-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11125:
--
Fix Version/s: (was: 0.99.2)

 Introduce a higher level interface for registering interest in coprocessor 
 upcalls
 --

 Key: HBASE-11125
 URL: https://issues.apache.org/jira/browse/HBASE-11125
 Project: HBase
  Issue Type: New Feature
Reporter: Andrew Purtell
Priority: Critical

 We should introduce a higher level interface for managing the registration of 
 'user' code for execution from the low level hooks. It should not be 
 necessary for coprocessor implementers to learn the universe of available low 
 level hooks and the subtleties of their placement within HBase core code. 
 Instead the higher level API should allow the implementer to describe their 
 intent and then this API should choose the appropriate low level hook 
 placement.
 A very desirable side effect is a layer of indirection between coprocessor 
 implementers and the actual hooks. This will address the perennial complaint 
 that the low level hooks change too much from release to release, as recently 
 discussed during the RM panel at HBaseCon. If we try to avoid changing the 
 particular placement and arguments of hook functions in response to those 
 complaints, this can be an onerous constraint on necessary internals 
 evolution. Instead we can direct coprocessor implementers to consider the new 
 API and provide the same interface stability guarantees there as we do for the 
 client API.





[jira] [Commented] (HBASE-12285) Builds are failing, possibly because of SUREFIRE-1091

2014-10-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181669#comment-14181669
 ] 

stack commented on HBASE-12285:
---

Done. Made it this in the branch-1 build:

-PrunAllTests -DreuseForks=false  -Dmaven.test.redirectTestOutputToFile=true 
install -Dsurefire.secondPartThreadCount=2 -Dit.test=noItTest

Why does trunk mostly pass while branch-1 does not?  Surefire is the same for both?

Let this run a while (I started a build).  Will put in the WARN later after 
this change has baked a while.

 Builds are failing, possibly because of SUREFIRE-1091
 -

 Key: HBASE-12285
 URL: https://issues.apache.org/jira/browse/HBASE-12285
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Dima Spivak
Assignee: Dima Spivak
Priority: Blocker
 Attachments: HBASE-12285_branch-1_v1.patch


 Our branch-1 builds on builds.apache.org have been failing in recent days 
 after we switched over to an official version of Surefire a few days back 
 (HBASE-4955). The version we're using, 2.17, is hit by a bug 
 ([SUREFIRE-1091|https://jira.codehaus.org/browse/SUREFIRE-1091]) that results 
 in an IOException, which looks like what we're seeing on Jenkins.





[jira] [Commented] (HBASE-11870) Optimization : Avoid copy of key and value for tags addition in AC and VC

2014-10-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181689#comment-14181689
 ] 

Andrew Purtell commented on HBASE-11870:


lgtm

 Optimization : Avoid copy of key and value for tags addition in AC and VC
 -

 Key: HBASE-11870
 URL: https://issues.apache.org/jira/browse/HBASE-11870
 Project: HBase
  Issue Type: Improvement
  Components: Performance, security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: HBASE-11870.patch


 In AC and VC we have to add the per-cell ACL tags / visibility tags to Cells. 
 We get KeyValue objects, which need one backing array with key, value and 
 tags. So in order to add a tag we have to recreate the buffer and copy the 
 entire key, value and tags.  We can avoid this:
 Create a new Cell impl which wraps the original Cell and, for the non-tag 
 parts, just refers to the old buffer.
 This will contain a byte[] state for the tags part.
 Also we have to ensure we deal with Cells in the write path, not KV.
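The wrapping idea can be illustrated with a simplified stand-in. Note this `SimpleCell` is hypothetical, not HBase's real `org.apache.hadoop.hbase.Cell` API; it only shows how a wrapper can add tags while the non-tag part still refers to the original backing array, avoiding the full copy.

```java
// Illustrative sketch only: SimpleCell is a made-up stand-in, not HBase's
// Cell API. The wrapper adds tags as new byte[] state while the non-tag
// part keeps pointing at the original backing array (no copy).
public class TagWrapDemo {
    // Minimal hypothetical cell abstraction.
    public interface SimpleCell {
        byte[] valueArray(); // backing array for the non-tag part
        byte[] tagsArray();  // tags, possibly empty
    }

    // Wrap instead of copy: the non-tag part still refers to the old buffer;
    // only the tags are held as new state in the wrapper.
    public static SimpleCell wrapWithTags(final SimpleCell cell, final byte[] tags) {
        return new SimpleCell() {
            @Override public byte[] valueArray() { return cell.valueArray(); } // no copy
            @Override public byte[] tagsArray() { return tags; }
        };
    }
}
```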





[jira] [Commented] (HBASE-12142) Truncate command does not preserve ACLs table

2014-10-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181691#comment-14181691
 ] 

Andrew Purtell commented on HBASE-12142:


bq.  Which parts would have to be applied to branch-1 and master?

None it seems, I was looking at the 0.98 patch

 Truncate command does not preserve ACLs table
 -

 Key: HBASE-12142
 URL: https://issues.apache.org/jira/browse/HBASE-12142
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6
Reporter: Vandana Ayyalasomayajula
Assignee: Vandana Ayyalasomayajula
Priority: Minor
  Labels: security
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-12142_0.patch, HBASE-12142_1.patch, 
 HBASE-12142_2.patch, HBASE-12142_98.patch, HBASE-12142_branch_1.patch, 
 HBASE-12142_master_addendum.patch


 The current truncate command does not preserve acls on a table. 





[jira] [Updated] (HBASE-11164) Document and test rolling updates from 0.98 -> 1.0

2014-10-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11164:
--
Assignee: stack

 Document and test rolling updates from 0.98 -> 1.0
 --

 Key: HBASE-11164
 URL: https://issues.apache.org/jira/browse/HBASE-11164
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Assignee: stack
Priority: Critical
 Fix For: 0.99.2


 I think 1.0 should be rolling upgradable from 0.98 unless we break it 
 intentionally for a specific reason. Unless there is such an issue, let's 
 document that 1.0 and 0.98 should be rolling upgrade compatible. 
 We should also test this before the 0.99 release. 





[jira] [Commented] (HBASE-11164) Document and test rolling updates from 0.98 -> 1.0

2014-10-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181694#comment-14181694
 ] 

stack commented on HBASE-11164:
---

Trying to go from 0.96 to 0.99.  I stopped the 0.96 master in a 0.96 cluster 
and tried to start the 0.99 master and got below:

{code}
2014-10-23 11:12:37,537 FATAL [c2020:16020.activeMasterManager] master.HMaster: 
Unhandled exception. Starting shutdown.
org.apache.hadoop.hbase.TableNotFoundException: hbase:namespace
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1189)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1090)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1074)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1031)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getRegionLocation(ConnectionManager.java:865)
at 
org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:78)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:110)
at org.apache.hadoop.hbase.client.HTable.get(HTable.java:847)
at 
org.apache.hadoop.hbase.master.TableNamespaceManager.get(TableNamespaceManager.java:138)
at 
org.apache.hadoop.hbase.master.TableNamespaceManager.isTableAvailableAndInitialized(TableNamespaceManager.java:268)
at 
org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:109)
at 
org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:749)
at 
org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:629)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:157)
at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1290)
at java.lang.Thread.run(Thread.java:745)

{code}

 Document and test rolling updates from 0.98 -> 1.0
 --

 Key: HBASE-11164
 URL: https://issues.apache.org/jira/browse/HBASE-11164
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Assignee: stack
Priority: Critical
 Fix For: 0.99.2


 I think 1.0 should be rolling upgradable from 0.98 unless we break it 
 intentionally for a specific reason. Unless there is such an issue, let's 
 document that 1.0 and 0.98 should be rolling upgrade compatible. 
 We should also test this before the 0.99 release. 





[jira] [Commented] (HBASE-12142) Truncate command does not preserve ACLs table

2014-10-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181695#comment-14181695
 ] 

Andrew Purtell commented on HBASE-12142:


bq.  Let me upload a new patch.

Thank you [~avandana]

 Truncate command does not preserve ACLs table
 -

 Key: HBASE-12142
 URL: https://issues.apache.org/jira/browse/HBASE-12142
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6
Reporter: Vandana Ayyalasomayajula
Assignee: Vandana Ayyalasomayajula
Priority: Minor
  Labels: security
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-12142_0.patch, HBASE-12142_1.patch, 
 HBASE-12142_2.patch, HBASE-12142_98.patch, HBASE-12142_branch_1.patch, 
 HBASE-12142_master_addendum.patch


 The current truncate command does not preserve acls on a table. 





[jira] [Commented] (HBASE-11645) Snapshot for MOB

2014-10-23 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181702#comment-14181702
 ] 

Jonathan Hsieh commented on HBASE-11645:


Hey folks, I'm going to commit to the branch even though I have some concerns 
with it, because there is a lot of good stuff in here. We can fix the concerns 
in the branch.  I'll file some follow-on issues.

(should use FileLink to handle read retry; tests have a lot of duplication and 
take a long time to run)

 Snapshot for MOB
 

 Key: HBASE-11645
 URL: https://issues.apache.org/jira/browse/HBASE-11645
 Project: HBase
  Issue Type: Sub-task
  Components: snapshots
Reporter: Jingcheng Du
Assignee: Jingcheng Du
 Attachments: HBASE-11645-V2.diff, HBASE-11645-V3.diff, 
 HBASE-11645-V4.diff, HBASE-11645.diff


  Add snapshot support for MOB.  In the initial implementation, taking a table 
 snapshot does not preserve the mob data.  This issue will make sure that when 
 a snapshot is taken, mob data is properly preserved and is restorable.





[jira] [Created] (HBASE-12331) Shorten the mob snapshot unit tests

2014-10-23 Thread Jonathan Hsieh (JIRA)
Jonathan Hsieh created HBASE-12331:
--

 Summary: Shorten the mob snapshot unit tests
 Key: HBASE-12331
 URL: https://issues.apache.org/jira/browse/HBASE-12331
 Project: HBase
  Issue Type: Sub-task
  Components: mob
Affects Versions: hbase-11339
Reporter: Jonathan Hsieh


The mob snapshot patch introduced a whole lot of tests that take a long time to 
run and would be better as integration tests.

{code}
---
 T E S T S
---
Running 
org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClientWithRegionReplicas
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 394.803 sec - 
in 
org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClientWithRegionReplicas
Running org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClient
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 212.377 sec - 
in org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClient
Running 
org.apache.hadoop.hbase.client.TestMobSnapshotFromClientWithRegionReplicas
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.463 sec - in 
org.apache.hadoop.hbase.client.TestMobSnapshotFromClientWithRegionReplicas
Running org.apache.hadoop.hbase.client.TestMobSnapshotFromClient
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.724 sec - in 
org.apache.hadoop.hbase.client.TestMobSnapshotFromClient
Running org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 204.03 sec - in 
org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient
Running 
org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClientWithRegionReplicas
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 214.052 sec - 
in 
org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClientWithRegionReplicas
Running org.apache.hadoop.hbase.client.TestMobSnapshotCloneIndependence
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 105.139 sec - 
in org.apache.hadoop.hbase.client.TestMobSnapshotCloneIndependence
Running org.apache.hadoop.hbase.regionserver.TestMobStoreScanner
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.42 sec - in 
org.apache.hadoop.hbase.regionserver.TestMobStoreScanner
Running org.apache.hadoop.hbase.regionserver.TestDeleteMobTable
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.136 sec - in 
org.apache.hadoop.hbase.regionserver.TestDeleteMobTable
Running org.apache.hadoop.hbase.regionserver.TestHMobStore
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.09 sec - in 
org.apache.hadoop.hbase.regionserver.TestHMobStore
Running org.apache.hadoop.hbase.regionserver.TestMobCompaction
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.629 sec - in 
org.apache.hadoop.hbase.regionserver.TestMobCompaction
Running org.apache.hadoop.hbase.mob.TestCachedMobFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.301 sec - in 
org.apache.hadoop.hbase.mob.TestCachedMobFile
Running org.apache.hadoop.hbase.mob.mapreduce.TestMobSweepJob
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.752 sec - in 
org.apache.hadoop.hbase.mob.mapreduce.TestMobSweepJob
Running org.apache.hadoop.hbase.mob.mapreduce.TestMobSweepReducer
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.276 sec - in 
org.apache.hadoop.hbase.mob.mapreduce.TestMobSweepReducer
Running org.apache.hadoop.hbase.mob.mapreduce.TestMobSweepMapper
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.46 sec - in 
org.apache.hadoop.hbase.mob.mapreduce.TestMobSweepMapper
Running org.apache.hadoop.hbase.mob.mapreduce.TestMobSweeper
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 173.05 sec - in 
org.apache.hadoop.hbase.mob.mapreduce.TestMobSweeper
Running org.apache.hadoop.hbase.mob.TestMobDataBlockEncoding
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.86 sec - in 
org.apache.hadoop.hbase.mob.TestMobDataBlockEncoding
Running org.apache.hadoop.hbase.mob.TestExpiredMobFileCleaner
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.029 sec - in 
org.apache.hadoop.hbase.mob.TestExpiredMobFileCleaner
Running org.apache.hadoop.hbase.mob.TestMobFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.562 sec - in 
org.apache.hadoop.hbase.mob.TestMobFile
Running org.apache.hadoop.hbase.mob.TestMobFileCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.173 sec - in 
org.apache.hadoop.hbase.mob.TestMobFileCache
Running org.apache.hadoop.hbase.mob.TestDefaultMobStoreFlusher
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.586 sec - in 
org.apache.hadoop.hbase.mob.TestDefaultMobStoreFlusher
Running 

[jira] [Commented] (HBASE-12285) Builds are failing, possibly because of SUREFIRE-1091

2014-10-23 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181712#comment-14181712
 ] 

Elliott Clark commented on HBASE-12285:
---

Do we know anyone in the surefire community? At FB we're running the snapshot 
version of surefire and it's fixed everything for us.

 Builds are failing, possibly because of SUREFIRE-1091
 -

 Key: HBASE-12285
 URL: https://issues.apache.org/jira/browse/HBASE-12285
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Dima Spivak
Assignee: Dima Spivak
Priority: Blocker
 Attachments: HBASE-12285_branch-1_v1.patch


 Our branch-1 builds on builds.apache.org have been failing in recent days 
 after we switched over to an official version of Surefire a few days back 
 (HBASE-4955). The version we're using, 2.17, is hit by a bug 
 ([SUREFIRE-1091|https://jira.codehaus.org/browse/SUREFIRE-1091]) that results 
 in an IOException, which looks like what we're seeing on Jenkins.





[jira] [Created] (HBASE-12332) [mob] use filelink instead of retry when resolving an hfilelink.

2014-10-23 Thread Jonathan Hsieh (JIRA)
Jonathan Hsieh created HBASE-12332:
--

 Summary: [mob] use filelink instead of retry when resolving an 
hfilelink.
 Key: HBASE-12332
 URL: https://issues.apache.org/jira/browse/HBASE-12332
 Project: HBase
  Issue Type: Sub-task
  Components: mob
Affects Versions: hbase-11339
Reporter: Jonathan Hsieh


In the snapshot code, HMobStore was modified to traverse an hfile link to a 
mob.  Ideally this should use the transparent FileLink code to read the data.

Also there will likely be some issues with the mob file cache with these links.
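The "transparent file link" behavior can be illustrated generically. This sketch is not HBase's actual org.apache.hadoop.hbase.io.FileLink class; it only shows the fallback-location idea (try each known location instead of retrying a single path), with all names hypothetical.

```java
// Generic illustration of the file-link idea only -- NOT HBase's FileLink.
// A reader transparently tries each candidate location (e.g. the file's
// original directory, then the archive directory) rather than retrying one
// path and failing.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class LinkedFileReader {
    // Return the contents of the first candidate location that exists.
    public static byte[] readFirstAvailable(Path... candidates) throws IOException {
        for (Path p : candidates) {
            if (Files.exists(p)) {
                return Files.readAllBytes(p);
            }
        }
        throw new IOException("none of the candidate locations exist");
    }
}
```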





[jira] [Updated] (HBASE-12142) Truncate command does not preserve ACLs table

2014-10-23 Thread Vandana Ayyalasomayajula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vandana Ayyalasomayajula updated HBASE-12142:
-
Attachment: HBASE-12142_98_2.patch

Patch without any changes to create and delete table handlers. 

 Truncate command does not preserve ACLs table
 -

 Key: HBASE-12142
 URL: https://issues.apache.org/jira/browse/HBASE-12142
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6
Reporter: Vandana Ayyalasomayajula
Assignee: Vandana Ayyalasomayajula
Priority: Minor
  Labels: security
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-12142_0.patch, HBASE-12142_1.patch, 
 HBASE-12142_2.patch, HBASE-12142_98.patch, HBASE-12142_98_2.patch, 
 HBASE-12142_branch_1.patch, HBASE-12142_master_addendum.patch


 The current truncate command does not preserve acls on a table. 





[jira] [Resolved] (HBASE-11645) Snapshot for MOB

2014-10-23 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh resolved HBASE-11645.

   Resolution: Fixed
Fix Version/s: hbase-11339
 Hadoop Flags: Reviewed

 Snapshot for MOB
 

 Key: HBASE-11645
 URL: https://issues.apache.org/jira/browse/HBASE-11645
 Project: HBase
  Issue Type: Sub-task
  Components: snapshots
Reporter: Jingcheng Du
Assignee: Jingcheng Du
 Fix For: hbase-11339

 Attachments: HBASE-11645-V2.diff, HBASE-11645-V3.diff, 
 HBASE-11645-V4.diff, HBASE-11645.diff


  Add snapshot support for MOB.  In the initial implementation, taking a table 
 snapshot does not preserve the mob data.  This issue will make sure that when 
 a snapshot is taken, mob data is properly preserved and is restorable.





[jira] [Commented] (HBASE-12142) Truncate command does not preserve ACLs table

2014-10-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181733#comment-14181733
 ] 

Hadoop QA commented on HBASE-12142:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12676659/HBASE-12142_98_2.patch
  against trunk revision .
  ATTACHMENT ID: 12676659

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11454//console

This message is automatically generated.

 Truncate command does not preserve ACLs table
 -

 Key: HBASE-12142
 URL: https://issues.apache.org/jira/browse/HBASE-12142
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6
Reporter: Vandana Ayyalasomayajula
Assignee: Vandana Ayyalasomayajula
Priority: Minor
  Labels: security
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-12142_0.patch, HBASE-12142_1.patch, 
 HBASE-12142_2.patch, HBASE-12142_98.patch, HBASE-12142_98_2.patch, 
 HBASE-12142_branch_1.patch, HBASE-12142_master_addendum.patch


 The current truncate command does not preserve acls on a table. 





[jira] [Commented] (HBASE-12319) Inconsistencies during region recovery due to close/open of a region during recovery

2014-10-23 Thread Jeffrey Zhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181756#comment-14181756
 ] 

Jeffrey Zhong commented on HBASE-12319:
---

This issue is due to a region opening being canceled while the AM doesn't wait 
for the cancel to complete and reassigns the region immediately, as shown in the 
following log lines. Therefore, the previous region open operation may overlap 
the new region assignment. This issue happens in 0.98 and branch-1. 

{noformat}
hbase-hbase-master-hor9n01.gq1.ygridcore.net.log:Caused by: 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.NotServingRegionException):
 org.apache.hadoop.hbase.NotServingRegionException: The region 
51af4bd23dc32a940ad2dd5435f00e1d was opening but not yet served. Opening is 
cancelled.
hbase-hbase-master-hor9n01.gq1.ygridcore.net.log:2014-10-14 13:45:30,564 INFO  
[AM.-pool1-t8] master.RegionStates: Transitioned 
{51af4bd23dc32a940ad2dd5435f00e1d state=OPENING, ts=1413294330350, 
server=hor9n10.gq1.ygridcore.net,60020,1413293516978} to 
{51af4bd23dc32a940ad2dd5435f00e1d state=OFFLINE, ts=1413294330564, 
server=hor9n10.gq1.ygridcore.net,60020,1413293516978}
hbase-hbase-master-hor9n01.gq1.ygridcore.net.log:2014-10-14 13:45:30,566 DEBUG 
[AM.-pool1-t8] master.AssignmentManager: No previous transition plan found (or 
ignoring an existing plan) for 
IntegrationTestIngest,5994,1413293958381.51af4bd23dc32a940ad2dd5435f00e1d.; 
generated random 
plan=hri=IntegrationTestIngest,5994,1413293958381.51af4bd23dc32a940ad2dd5435f00e1d.,
 src=, dest=hor9n01.gq1.ygridcore.net,60020,1413294323616; 4 (online=4, 
available=4) available servers, forceNewPlan=true
hbase-hbase-master-hor9n01.gq1.ygridcore.net.log:2014-10-14 13:45:30,566 DEBUG 
[AM.-pool1-t8] zookeeper.ZKAssign: master:6-0x3490b3b07a1085e, 
quorum=hor9n08.gq1.ygridcore.net:2181,hor9n01.gq1.ygridcore.net:2181,hor9n10.gq1.ygridcore.net:2181,
 baseZNode=/hbase Creating (or updating) unassigned node 
51af4bd23dc32a940ad2dd5435f00e1d with OFFLINE state
hbase-hbase-master-hor9n01.gq1.ygridcore.net.log:2014-10-14 13:45:30,589 INFO  
[AM.-pool1-t8] master.AssignmentManager: Assigning 
IntegrationTestIngest,5994,1413293958381.51af4bd23dc32a940ad2dd5435f00e1d. 
to hor9n01.gq1.ygridcore.net,60020,1413294323616
{noformat}
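The race in the log above can be modeled with a simple latch. This is an illustrative sketch of the idea (cancel, then wait for the open worker to finish before reassigning), not the actual AssignmentManager code; all names here are hypothetical.

```java
// Illustrative model of the fix's idea, not real AssignmentManager code:
// the master cancels a pending region open and then *waits* for the open
// worker to acknowledge completion before reassigning, so the old open can
// no longer overlap the new assignment.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class CancelThenReassign {
    private final CountDownLatch openFinished = new CountDownLatch(1);
    private volatile boolean cancelled = false;

    // Region-open worker: bails out if cancelled, and always signals that it
    // is done (whether it opened or aborted).
    public void runOpen() {
        try {
            if (cancelled) {
                return; // "was opening but not yet served. Opening is cancelled."
            }
            // ... open the region ...
        } finally {
            openFinished.countDown();
        }
    }

    // Master side: request cancellation, then block until the worker is done
    // before the region may be assigned elsewhere. Returns false on timeout.
    public boolean cancelAndAwait(long timeoutMs) throws InterruptedException {
        cancelled = true;
        return openFinished.await(timeoutMs, TimeUnit.MILLISECONDS);
    }
}
```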

 Inconsistencies during region recovery due to close/open of a region during 
 recovery
 

 Key: HBASE-12319
 URL: https://issues.apache.org/jira/browse/HBASE-12319
 Project: HBase
  Issue Type: Bug
Reporter: Devaraj Das
Assignee: Jeffrey Zhong

 In one of my test runs, I saw the following:
 {noformat}
 2014-10-14 13:45:30,782 DEBUG 
 [StoreOpener-51af4bd23dc32a940ad2dd5435f00e1d-1] regionserver.HStore: loaded 
 hdfs://hor9n01.gq1.ygridcore.net:8020/apps/hbase/data/data/default/IntegrationTestIngest/51af4bd23dc32a940ad2dd5435f00e1d/test_cf/d6df5cfe15ca41d68c619489fbde4d04,
  isReference=false, isBulkLoadResult=false, seqid=141197, majorCompaction=true
 2014-10-14 13:45:30,788 DEBUG [RS_OPEN_REGION-hor9n01:60020-1] 
 regionserver.HRegion: Found 3 recovered edits file(s) under 
 hdfs://hor9n01.gq1.ygridcore.net:8020/apps/hbase/data/data/default/IntegrationTestIngest/51af4bd23dc32a940ad2dd5435f00e1d
 .
 .
 2014-10-14 13:45:31,916 WARN  [RS_OPEN_REGION-hor9n01:60020-1] 
 regionserver.HRegion: Null or non-existent edits file: 
 hdfs://hor9n01.gq1.ygridcore.net:8020/apps/hbase/data/data/default/IntegrationTestIngest/51af4bd23dc32a940ad2dd5435f00e1d/recovered.edits/0198080
 {noformat}
 The above logs are from a regionserver, say RS2. From the initial analysis it 
 seemed like the master asked a certain regionserver to open the region (let's 
 say RS1) and for some reason asked it to close soon after. The open was still 
 proceeding on RS1 but the master reassigned the region to RS2. This also 
 started the recovery but it ended up seeing an inconsistent view of the 
 recovered-edits files (it reports missing files as per the logs above) since 
 the first regionserver (RS1) deleted some files after it completed the 
 recovery. When RS2 really opens the region, it might not see the recent data 
 that was written by flushes on hor9n10 during the recovery process. Reads of 
 that data would have inconsistencies.





[jira] [Updated] (HBASE-12319) Inconsistencies during region recovery due to close/open of a region during recovery

2014-10-23 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-12319:
--
Affects Version/s: 0.98.7
   0.99.1

 Inconsistencies during region recovery due to close/open of a region during 
 recovery
 

 Key: HBASE-12319
 URL: https://issues.apache.org/jira/browse/HBASE-12319
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7, 0.99.1
Reporter: Devaraj Das
Assignee: Jeffrey Zhong
 Attachments: HBASE-12319.patch


 In one of my test runs, I saw the following:
 {noformat}
 2014-10-14 13:45:30,782 DEBUG 
 [StoreOpener-51af4bd23dc32a940ad2dd5435f00e1d-1] regionserver.HStore: loaded 
 hdfs://hor9n01.gq1.ygridcore.net:8020/apps/hbase/data/data/default/IntegrationTestIngest/51af4bd23dc32a940ad2dd5435f00e1d/test_cf/d6df5cfe15ca41d68c619489fbde4d04,
  isReference=false, isBulkLoadResult=false, seqid=141197, majorCompaction=true
 2014-10-14 13:45:30,788 DEBUG [RS_OPEN_REGION-hor9n01:60020-1] 
 regionserver.HRegion: Found 3 recovered edits file(s) under 
 hdfs://hor9n01.gq1.ygridcore.net:8020/apps/hbase/data/data/default/IntegrationTestIngest/51af4bd23dc32a940ad2dd5435f00e1d
 .
 .
 2014-10-14 13:45:31,916 WARN  [RS_OPEN_REGION-hor9n01:60020-1] 
 regionserver.HRegion: Null or non-existent edits file: 
 hdfs://hor9n01.gq1.ygridcore.net:8020/apps/hbase/data/data/default/IntegrationTestIngest/51af4bd23dc32a940ad2dd5435f00e1d/recovered.edits/0198080
 {noformat}
 The above logs are from a regionserver, say RS2. From the initial analysis it 
 seemed like the master asked a certain regionserver to open the region (let's 
 say RS1) and for some reason asked it to close soon after. The open was still 
 proceeding on RS1 but the master reassigned the region to RS2. This also 
 started the recovery but it ended up seeing an inconsistent view of the 
 recovered-edits files (it reports missing files as per the logs above) since 
 the first regionserver (RS1) deleted some files after it completed the 
 recovery. When RS2 really opens the region, it might not see the recent data 
 that was written by flushes on hor9n10 during the recovery process. Reads of 
 that data would have inconsistencies.





[jira] [Updated] (HBASE-12319) Inconsistencies during region recovery due to close/open of a region during recovery

2014-10-23 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-12319:
--
Attachment: HBASE-12319.patch

This patch lets the AM wait for the region open cancellation to complete before 
re-assigning a region. [~jxiang] Could you please take a look? Thanks.

 Inconsistencies during region recovery due to close/open of a region during 
 recovery
 

 Key: HBASE-12319
 URL: https://issues.apache.org/jira/browse/HBASE-12319
 Project: HBase
  Issue Type: Bug
Reporter: Devaraj Das
Assignee: Jeffrey Zhong
 Attachments: HBASE-12319.patch


 In one of my test runs, I saw the following:
 {noformat}
 2014-10-14 13:45:30,782 DEBUG 
 [StoreOpener-51af4bd23dc32a940ad2dd5435f00e1d-1] regionserver.HStore: loaded 
 hdfs://hor9n01.gq1.ygridcore.net:8020/apps/hbase/data/data/default/IntegrationTestIngest/51af4bd23dc32a940ad2dd5435f00e1d/test_cf/d6df5cfe15ca41d68c619489fbde4d04,
  isReference=false, isBulkLoadResult=false, seqid=141197, majorCompaction=true
 2014-10-14 13:45:30,788 DEBUG [RS_OPEN_REGION-hor9n01:60020-1] 
 regionserver.HRegion: Found 3 recovered edits file(s) under 
 hdfs://hor9n01.gq1.ygridcore.net:8020/apps/hbase/data/data/default/IntegrationTestIngest/51af4bd23dc32a940ad2dd5435f00e1d
 .
 .
 2014-10-14 13:45:31,916 WARN  [RS_OPEN_REGION-hor9n01:60020-1] 
 regionserver.HRegion: Null or non-existent edits file: 
 hdfs://hor9n01.gq1.ygridcore.net:8020/apps/hbase/data/data/default/IntegrationTestIngest/51af4bd23dc32a940ad2dd5435f00e1d/recovered.edits/0198080
 {noformat}
 The above logs are from a regionserver, say RS2. From the initial analysis it 
 seemed like the master asked a certain regionserver to open the region (let's 
 say RS1) and for some reason asked it to close soon after. The open was still 
 proceeding on RS1 but the master reassigned the region to RS2. This also 
 started the recovery but it ended up seeing an inconsistent view of the 
 recovered-edits files (it reports missing files as per the logs above) since 
 the first regionserver (RS1) deleted some files after it completed the 
 recovery. When RS2 really opens the region, it might not see the recent data 
 that was written by flushes on hor9n10 during the recovery process. Reads of 
 that data would have inconsistencies.





[jira] [Commented] (HBASE-12312) Another couple of createTable race conditions

2014-10-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181808#comment-14181808
 ] 

Hadoop QA commented on HBASE-12312:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12676635/HBASE-12312_master_v3%20%281%29.patch
  against trunk revision .
  ATTACHMENT ID: 12676635

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 15 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the trunk's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.replication.TestReplicationDisableInactivePeer

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11453//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11453//artifact/patchprocess/patchReleaseAuditWarnings.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11453//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11453//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11453//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11453//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11453//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11453//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11453//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11453//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11453//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11453//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11453//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11453//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11453//console

This message is automatically generated.

 Another couple of createTable race conditions
 -

 Key: HBASE-12312
 URL: https://issues.apache.org/jira/browse/HBASE-12312
 Project: HBase
  Issue Type: Bug
Reporter: Dima Spivak
Assignee: Dima Spivak
 Attachments: HBASE-12312_master_v1.patch, 
 HBASE-12312_master_v2.patch, HBASE-12312_master_v3 (1).patch, 
 HBASE-12312_master_v3.patch, HBASE-12312_master_v3.patch, 
 HBASE-12312_master_v3.patch


 Found a couple more failing tests in TestAccessController and 
 TestScanEarlyTermination caused by my favorite race condition. :) Will post a 
 patch in a second.





[jira] [Commented] (HBASE-11964) Improve spreading replication load from failed regionservers

2014-10-23 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181835#comment-14181835
 ] 

Lars Hofhansl commented on HBASE-11964:
---

+1

 Improve spreading replication load from failed regionservers
 

 Key: HBASE-11964
 URL: https://issues.apache.org/jira/browse/HBASE-11964
 Project: HBase
  Issue Type: Sub-task
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 2.0.0, 0.98.8, 0.94.25, 0.99.2

 Attachments: HBASE-11964.patch, HBASE-11964.patch, HBASE-11964.patch


 Improve replication source thread handling. Improve fanout when transferring 
 queues. Ensure replication sources terminate properly.





[jira] [Updated] (HBASE-8610) Introduce interfaces to support MultiWAL

2014-10-23 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-8610:
---
   Resolution: Duplicate
Fix Version/s: (was: 0.99.2)
 Assignee: (was: ramkrishna.s.vasudevan)
   Status: Resolved  (was: Patch Available)

Current version of HBASE-10378 provides all the updates needed to do multiwal. 

 Introduce interfaces to support MultiWAL
 

 Key: HBASE-8610
 URL: https://issues.apache.org/jira/browse/HBASE-8610
 Project: HBase
  Issue Type: Improvement
  Components: wal
Reporter: ramkrishna.s.vasudevan
 Attachments: HBASE-8610_firstcut.patch


 As the heading says this JIRA is specific to adding interfaces to support 
 MultiWAL.





[jira] [Updated] (HBASE-10378) Divide HLog interface into User and Implementor specific interfaces

2014-10-23 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-10378:

Fix Version/s: 0.99.2
   2.0.0
   Status: Patch Available  (was: In Progress)

Patch is up on review board; it keeps enough compatibility that I think it would 
work for branch-1. (ping [~enis])

The current set of changes to the API should be sufficient to allow for 
multiple wals per region server, so I subsumed HBASE-8610 and attached its last 
targeted version.

 Divide HLog interface into User and Implementor specific interfaces
 ---

 Key: HBASE-10378
 URL: https://issues.apache.org/jira/browse/HBASE-10378
 Project: HBase
  Issue Type: Sub-task
  Components: wal
Reporter: Himanshu Vashishtha
Assignee: Sean Busbey
 Fix For: 2.0.0, 0.99.2

 Attachments: 10378-1.patch, 10378-2.patch


 HBASE-5937 introduces the HLog interface as a first step to support multiple 
 WAL implementations. This interface is a good start, but has some 
 limitations/drawbacks in its current state, such as:
 1) There is no clear distinction b/w User and Implementor APIs, and it 
 provides APIs both for WAL users (append, sync, etc) and also WAL 
 implementors (Reader/Writer interfaces, etc). There are APIs which are very 
 much implementation specific (getFileNum, etc) and a user such as a 
 RegionServer shouldn't know about it.
 2) There are about 14 methods in FSHLog which are not present in HLog 
 interface but are used at several places in the unit test code. These tests 
 typecast HLog to FSHLog, which makes it very difficult to test multiple WAL 
 implementations without doing some ugly checks.
 I'd like to propose some changes in HLog interface that would ease the multi 
 WAL story:
 1) Have two interfaces WAL and WALService. WAL provides APIs for 
 implementors. WALService provides APIs for users (such as RegionServer).
 2) A skeleton implementation of the above two interface as the base class for 
 other WAL implementations (AbstractWAL). It provides required fields for all 
 subclasses (fs, conf, log dir, etc). Make a minimal set of test only methods 
 and add this set in AbstractWAL.
 3) HLogFactory returns a WALService reference when creating a WAL instance; 
 if a user needs to access impl-specific APIs (there are unit tests which get 
 WAL from a HRegionServer and then call impl-specific APIs), use AbstractWAL 
 type casting;
 4) Make TestHLog abstract and let all implementors provide their respective 
 test class which extends TestHLog (TestFSHLog, for example).
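
The split proposed in points 1-2 can be sketched roughly as follows. This is a hedged illustration only: the interface and class names (WAL, WALService, AbstractWAL, FSHLog) follow the proposal above, but the method sets shown are invented for the sketch and are not the real HBase APIs.

```java
// Hedged sketch of the proposed WAL/WALService split; method sets are
// illustrative stand-ins, not HBase's actual signatures.
public class WalSplitSketch {

    // User-facing API (what a RegionServer would see).
    interface WALService {
        long append(byte[] regionName, byte[] edit);
        void sync();
    }

    // Implementor-facing API; adds impl-specific details such as
    // getFileNum that WAL users should not depend on.
    interface WAL extends WALService {
        long getFileNum();
    }

    // Skeleton base class (the "AbstractWAL" of point 2): would hold
    // shared fields (fs, conf, log dir) and the test-only helpers.
    static abstract class AbstractWAL implements WAL {
        protected long fileNum = 1;
        public long getFileNum() { return fileNum; }
    }

    // One concrete implementation; other WAL implementations would
    // extend AbstractWAL the same way.
    static class FSHLog extends AbstractWAL {
        private long seq = 0;
        public long append(byte[] regionName, byte[] edit) { return ++seq; }
        public void sync() { /* would flush edits to the filesystem */ }
    }

    public static void main(String[] args) {
        WALService wal = new FSHLog();   // users only see WALService
        System.out.println(wal.append(new byte[0], new byte[0]));
    }
}
```

Point 3's HLogFactory would hand out the WALService reference, and only test code would down-cast to AbstractWAL.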





[jira] [Commented] (HBASE-5699) Run with > 1 WAL in HRegionServer

2014-10-23 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181895#comment-14181895
 ] 

Sean Busbey commented on HBASE-5699:


I have a patch implementing this on top of the refactoring in HBASE-10378. Any 
objections to me taking over the issue and posting it?

 Run with > 1 WAL in HRegionServer
 -

 Key: HBASE-5699
 URL: https://issues.apache.org/jira/browse/HBASE-5699
 Project: HBase
  Issue Type: Improvement
  Components: Performance, wal
Reporter: binlijin
Assignee: Li Pi
Priority: Critical
 Attachments: PerfHbase.txt








[jira] [Commented] (HBASE-12319) Inconsistencies during region recovery due to close/open of a region during recovery

2014-10-23 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181901#comment-14181901
 ] 

Jimmy Xiang commented on HBASE-12319:
-

+1. Looks good to me.

 Inconsistencies during region recovery due to close/open of a region during 
 recovery
 

 Key: HBASE-12319
 URL: https://issues.apache.org/jira/browse/HBASE-12319
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7, 0.99.1
Reporter: Devaraj Das
Assignee: Jeffrey Zhong
 Attachments: HBASE-12319.patch


 In one of my test runs, I saw the following:
 {noformat}
 2014-10-14 13:45:30,782 DEBUG 
 [StoreOpener-51af4bd23dc32a940ad2dd5435f00e1d-1] regionserver.HStore: loaded 
 hdfs://hor9n01.gq1.ygridcore.net:8020/apps/hbase/data/data/default/IntegrationTestIngest/51af4bd23dc32a940ad2dd5435f00e1d/test_cf/d6df5cfe15ca41d68c619489fbde4d04,
  isReference=false, isBulkLoadResult=false, seqid=141197, majorCompaction=true
 2014-10-14 13:45:30,788 DEBUG [RS_OPEN_REGION-hor9n01:60020-1] 
 regionserver.HRegion: Found 3 recovered edits file(s) under 
 hdfs://hor9n01.gq1.ygridcore.net:8020/apps/hbase/data/data/default/IntegrationTestIngest/51af4bd23dc32a940ad2dd5435f00e1d
 .
 .
 2014-10-14 13:45:31,916 WARN  [RS_OPEN_REGION-hor9n01:60020-1] 
 regionserver.HRegion: Null or non-existent edits file: 
 hdfs://hor9n01.gq1.ygridcore.net:8020/apps/hbase/data/data/default/IntegrationTestIngest/51af4bd23dc32a940ad2dd5435f00e1d/recovered.edits/0198080
 {noformat}
 The above logs are from a regionserver, say RS2. From the initial analysis it 
 seemed like the master asked a certain regionserver to open the region (let's 
 say RS1) and for some reason asked it to close soon after. The open was still 
 proceeding on RS1 but the master reassigned the region to RS2. This also 
 started the recovery but it ended up seeing an inconsistent view of the 
 recovered-edits files (it reports missing files as per the logs above) since 
 the first regionserver (RS1) deleted some files after it completed the 
 recovery. When RS2 really opens the region, it might not see the recent data 
 that was written by flushes on hor9n10 during the recovery process. Reads of 
 that data would have inconsistencies.





[jira] [Commented] (HBASE-11164) Document and test rolling updates from 0.98 -> 1.0

2014-10-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14181911#comment-14181911
 ] 

stack commented on HBASE-11164:
---

Tried to restart the 0.96 master and got this:

{code}
2014-10-23 11:19:32,251 FATAL 
[master-c2020.halxg.cloudera.com,6,1414088359278] master.HMaster: Unhandled 
exception. Starting shutdown.
java.lang.NullPointerException
at org.apache.hadoop.hbase.util.Bytes.toBytes(Bytes.java:441)
at 
org.apache.hadoop.hbase.zookeeper.ClusterId.setClusterId(ClusterId.java:72)
at 
org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:581)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:433)
at java.lang.Thread.run(Thread.java:745)
{code}

Second time around it worked.

On a good note, the branch-1 shell works with 0.96 cluster though it can't 
'see' the namespace table.  Can scan hbase:meta but not hbase:namespace. 
Digging.


 Document and test rolling updates from 0.98 -> 1.0
 --

 Key: HBASE-11164
 URL: https://issues.apache.org/jira/browse/HBASE-11164
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Assignee: stack
Priority: Critical
 Fix For: 0.99.2


 I think 1.0 should be rolling upgradable from 0.98 unless we break it 
 intentionally for a specific reason. Unless there is such an issue, lets 
 document that 1.0 and 0.98 should be rolling upgrade compatible. 
 We should also test this before the 0.99 release. 





[jira] [Commented] (HBASE-10780) HFilePrettyPrinter#processFile should return immediately if file does not exists.

2014-10-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182017#comment-14182017
 ] 

Ted Yu commented on HBASE-10780:


@Ashish:
Mind attaching an up-to-date patch?

 HFilePrettyPrinter#processFile should return immediately if file does not 
 exists.
 -

 Key: HBASE-10780
 URL: https://issues.apache.org/jira/browse/HBASE-10780
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.11
Reporter: Ashish Singhi
Priority: Minor
 Attachments: HBASE-10780.patch


 HFilePrettyPrinter#processFile should return immediately if the file does not 
 exist, the same as HLogPrettyPrinter#run:
 {code}
 if (!fs.exists(file)) {
   System.err.println("ERROR, file doesnt exist: " + file);
 }{code}





[jira] [Created] (HBASE-12333) Add Integration Test Running which is more friendly

2014-10-23 Thread Manukranth Kolloju (JIRA)
Manukranth Kolloju created HBASE-12333:
--

 Summary: Add Integration Test Running which is more friendly
 Key: HBASE-12333
 URL: https://issues.apache.org/jira/browse/HBASE-12333
 Project: HBase
  Issue Type: New Feature
  Components: test
Affects Versions: 2.0.0
Reporter: Manukranth Kolloju
Assignee: Manukranth Kolloju
 Fix For: 2.0.0


This Jira is intended to add a Driver class which would run a list of 
integration tests on an actual hbase cluster, generate a machine-readable 
results file in JSON, and put it on HDFS. The idea is to make it easy to run the 
driver class with a long list of appropriate command line params and wait for 
the JSON file on HDFS. This will help in plugging into external automation and 
makes it easier to maintain continuous integration scripts.





[jira] [Commented] (HBASE-12322) Add clean up command to ITBLL

2014-10-23 Thread Manukranth Kolloju (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182137#comment-14182137
 ] 

Manukranth Kolloju commented on HBASE-12322:


+1, the change looks good. Looking forward to using this :)

 Add clean up command to ITBLL
 -

 Key: HBASE-12322
 URL: https://issues.apache.org/jira/browse/HBASE-12322
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: HBASE-12322.patch


 Right now ITBLL can leave a table and some files on HDFS. It's then up to the 
 user to clean them up. This can be a little messy. Lets give a single command 
 to do that.





[jira] [Commented] (HBASE-10780) HFilePrettyPrinter#processFile should return immediately if file does not exists.

2014-10-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182174#comment-14182174
 ] 

Hadoop QA commented on HBASE-10780:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12635240/HBASE-10780.patch
  against trunk revision .
  ATTACHMENT ID: 12635240

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the trunk's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11456//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11456//artifact/patchprocess/patchReleaseAuditWarnings.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11456//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11456//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11456//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11456//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11456//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11456//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11456//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11456//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11456//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11456//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11456//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11456//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11456//console

This message is automatically generated.

 HFilePrettyPrinter#processFile should return immediately if file does not 
 exists.
 -

 Key: HBASE-10780
 URL: https://issues.apache.org/jira/browse/HBASE-10780
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.11
Reporter: Ashish Singhi
Priority: Minor
 Attachments: HBASE-10780.patch


 HFilePrettyPrinter#processFile should return immediately if the file does not 
 exist, the same as HLogPrettyPrinter#run:
 {code}
 if (!fs.exists(file)) {
   System.err.println("ERROR, file doesnt exist: " + file);
 }{code}





[jira] [Commented] (HBASE-10780) HFilePrettyPrinter#processFile should return immediately if file does not exists.

2014-10-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182180#comment-14182180
 ] 

Ted Yu commented on HBASE-10780:


Currently we have the following when checking option 'w':
{code}
System.err.println("Invalid row is specified.");
System.exit(-1);
{code}
You can use a different negative value.
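
Putting the issue description and the comment above together, a minimal sketch of the suggested behavior might look like this. The method name processFile and the -2 code are illustrative only ("a different negative value" from the -1 already used for an invalid row); the real processFile signature and surrounding HFile logic are not shown.

```java
// Hedged sketch: bail out immediately with a distinct negative value
// when the target file is missing, mirroring the HLogPrettyPrinter#run
// check quoted in the description. Names and -2 are illustrative.
import java.io.File;

public class ProcessFileSketch {
    // -1 is already taken by the invalid-row check, so a missing file
    // returns a different negative value.
    static int processFile(File file) {
        if (!file.exists()) {
            System.err.println("ERROR, file doesnt exist: " + file);
            return -2;   // return immediately instead of continuing
        }
        // ... the normal HFile pretty-printing would happen here ...
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(processFile(new File("/no/such/hfile")));
    }
}
```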

 HFilePrettyPrinter#processFile should return immediately if file does not 
 exists.
 -

 Key: HBASE-10780
 URL: https://issues.apache.org/jira/browse/HBASE-10780
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.11
Reporter: Ashish Singhi
Priority: Minor
 Attachments: HBASE-10780.patch


 HFilePrettyPrinter#processFile should return immediately if the file does not 
 exist, the same as HLogPrettyPrinter#run:
 {code}
 if (!fs.exists(file)) {
   System.err.println("ERROR, file doesnt exist: " + file);
 }{code}





[jira] [Commented] (HBASE-12277) Refactor bulkLoad methods in AccessController to its own interface

2014-10-23 Thread Jeffrey Zhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182211#comment-14182211
 ] 

Jeffrey Zhong commented on HBASE-12277:
---

+1. Looks good to me. Please fix the javadoc warnings and make sure the unit 
tests pass. Thanks.

 Refactor bulkLoad methods in AccessController to its own interface
 --

 Key: HBASE-12277
 URL: https://issues.apache.org/jira/browse/HBASE-12277
 Project: HBase
  Issue Type: Bug
Reporter: Madhan Neethiraj
 Attachments: 
 0001-HBASE-12277-Refactored-bulk-load-methods-from-Access.patch, 
 0002-HBASE-12277-License-text-added-to-the-newly-created-.patch, 
 HBASE-12277-v2.patch, HBASE-12277-v3.patch, HBASE-12277.patch


 SecureBulkLoadEndPoint references a couple of methods, prePrepareBulkLoad() and 
 preCleanupBulkLoad(), implemented in AccessController, i.e. direct coupling 
 between the AccessController and SecureBulkLoadEndPoint classes.
 SecureBulkLoadEndPoint assumes presence of AccessController in 
 secure-cluster. If HBase is configured with another coprocessor for 
 access-control, SecureBulkLoadEndPoint fails with NPE.
 To remove this direct coupling, the bulk-load related methods in 
 AccessController should be refactored into an interface, and AccessController 
 should implement this interface. SecureBulkLoadEndPoint should then look for 
 coprocessors that implement this interface, instead of directly looking for 
 AccessController.
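
A rough sketch of the refactoring being described. All names here are simplified stand-ins rather than HBase's actual classes; the point is the shape of the change: extract the hooks into an interface and discover implementors instead of hard-coding AccessController.

```java
// Hedged sketch: extract the bulk-load hooks into an interface and have
// the endpoint scan loaded coprocessors for any implementor, avoiding
// the NPE described above when AccessController is absent.
import java.util.Arrays;
import java.util.List;

public class BulkLoadObserverSketch {

    // The extracted interface.
    interface BulkLoadObserver {
        void prePrepareBulkLoad(String table);
        void preCleanupBulkLoad(String table);
    }

    // AccessController now simply implements the interface; another
    // access-control coprocessor could implement it instead.
    static class AccessController implements BulkLoadObserver {
        public void prePrepareBulkLoad(String table) { /* ACL checks */ }
        public void preCleanupBulkLoad(String table) { /* ACL checks */ }
    }

    // What SecureBulkLoadEndPoint would do: look for the interface
    // rather than casting directly to AccessController.
    static BulkLoadObserver findObserver(List<Object> coprocessors) {
        for (Object cp : coprocessors) {
            if (cp instanceof BulkLoadObserver) {
                return (BulkLoadObserver) cp;
            }
        }
        return null;   // no observer configured; no NPE on the cast path
    }

    public static void main(String[] args) {
        List<Object> cps = Arrays.asList(new Object(), new AccessController());
        System.out.println(findObserver(cps) != null);
    }
}
```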





[jira] [Commented] (HBASE-12333) Add Integration Test Running which is more friendly

2014-10-23 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182227#comment-14182227
 ] 

Nick Dimiduk commented on HBASE-12333:
--

Have a look at the existing tool: IntegrationTestsDriver.

Also: HBASE-12262

 Add Integration Test Running which is more friendly
 ---

 Key: HBASE-12333
 URL: https://issues.apache.org/jira/browse/HBASE-12333
 Project: HBase
  Issue Type: New Feature
  Components: test
Affects Versions: 2.0.0
Reporter: Manukranth Kolloju
Assignee: Manukranth Kolloju
 Fix For: 2.0.0

   Original Estimate: 48h
  Remaining Estimate: 48h

 This Jira is intended to add a Driver class which would run a list of 
 integration tests on an actual hbase cluster, generate a machine-readable 
 results file in JSON, and put it on HDFS. The idea is to make it easy to run 
 the driver class with a long list of appropriate command line params and wait 
 for the JSON file on HDFS. This will help in plugging into external automation 
 and makes it easier to maintain continuous integration scripts.





[jira] [Commented] (HBASE-12333) Add Integration Test Running which is more friendly

2014-10-23 Thread Manukranth Kolloju (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182241#comment-14182241
 ] 

Manukranth Kolloju commented on HBASE-12333:


The existing test runner doesn't take any configuration parameters to customize 
the way the jobs can be run. Also, we're losing a lot of information and 
finally returning a 0 or 1. So, I am planning to enhance the existing tool to 
support some of the things that I mentioned above. 

 Add Integration Test Running which is more friendly
 ---

 Key: HBASE-12333
 URL: https://issues.apache.org/jira/browse/HBASE-12333
 Project: HBase
  Issue Type: New Feature
  Components: test
Affects Versions: 2.0.0
Reporter: Manukranth Kolloju
Assignee: Manukranth Kolloju
 Fix For: 2.0.0

   Original Estimate: 48h
  Remaining Estimate: 48h

 This Jira is intended to add a Driver class which would run a list of 
 integration tests on an actual hbase cluster, generate a machine-readable 
 results file in JSON, and put it on HDFS. The idea is to make it easy to run 
 the driver class with a long list of appropriate command line params and wait 
 for the JSON file on HDFS. This will help in plugging into external automation 
 and makes it easier to maintain continuous integration scripts.





[jira] [Updated] (HBASE-12333) Add Integration Test Runner which is more friendly

2014-10-23 Thread Manukranth Kolloju (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manukranth Kolloju updated HBASE-12333:
---
Summary: Add Integration Test Runner which is more friendly  (was: Add 
Integration Test Running which is more friendly)

 Add Integration Test Runner which is more friendly
 --

 Key: HBASE-12333
 URL: https://issues.apache.org/jira/browse/HBASE-12333
 Project: HBase
  Issue Type: New Feature
  Components: test
Affects Versions: 2.0.0
Reporter: Manukranth Kolloju
Assignee: Manukranth Kolloju
 Fix For: 2.0.0

   Original Estimate: 48h
  Remaining Estimate: 48h

 This Jira is intended to add a Driver class which would run a list of 
 integration tests on an actual hbase cluster, generate a machine-readable 
 results file in JSON, and put it on HDFS. The idea is to make it easy to run 
 the driver class with a long list of appropriate command line params and wait 
 for the JSON file on HDFS. This will help in plugging into external automation 
 and makes it easier to maintain continuous integration scripts.





[jira] [Updated] (HBASE-12328) Need to separate JvmMetrics for Master and RegionServer

2014-10-23 Thread Sanghyun Yun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanghyun Yun updated HBASE-12328:
-
Attachment: HBASE-12328.2.patch

Thanks for your review, [~eclark].
I added test code.

 Need to separate JvmMetrics for Master and RegionServer
 ---

 Key: HBASE-12328
 URL: https://issues.apache.org/jira/browse/HBASE-12328
 Project: HBase
  Issue Type: Improvement
Reporter: Sanghyun Yun
Priority: Minor
 Attachments: HBASE-12328.2.patch, HBASE-12328.patch


 tag.ProcessName of JvmMetrics is IPC.
 It is the same for both Master and RegionServer.
 {code:title=HBase(Master and RegionServer)'s Metrics Dump}
 ...
 name: Hadoop:service=HBase,name=JvmMetrics,
 modelerType: JvmMetrics,
 tag.Context: jvm,
 tag.ProcessName: IPC,
 tag.SessionId: ,
 ...
 {code}
 When I use HBase with Ganglia,
 I wrote tagsForPrefix.jvm=ProcessName in hadoop-metrics2-hbase.properties.
 {code:title=hadoop-metrics2-hbase.properties}
 ...
 *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
 hbase.sink.ganglia.tagsForPrefix.jvm=ProcessName
 ...
 {code}
 But Ganglia generates only one RRD file because tag.ProcessName is IPC for 
 both Master and RegionServer.
 I think JvmMetrics needs to be separated for Master and RegionServer.





[jira] [Commented] (HBASE-12328) Need to separate JvmMetrics for Master and RegionServer

2014-10-23 Thread Sanghyun Yun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182263#comment-14182263
 ] 

Sanghyun Yun commented on HBASE-12328:
--

Thanks for your review, [~stack].
I think it's same issue.
It will be changed to Hadoop:service=HBase,name=Master,sub=IPC.
{code}
name: Hadoop:service=HBase,name=Master,sub=IPC,
modelerType: Master,sub=IPC,
tag.Context: master,
{code}

 Need to separate JvmMetrics for Master and RegionServer
 ---

 Key: HBASE-12328
 URL: https://issues.apache.org/jira/browse/HBASE-12328
 Project: HBase
  Issue Type: Improvement
Reporter: Sanghyun Yun
Priority: Minor
 Attachments: HBASE-12328.2.patch, HBASE-12328.patch


 tag.ProcessName of JvmMetrics is IPC.
 It is the same for both Master and RegionServer.
 {code:title=HBase(Master and RegionServer)'s Metrics Dump}
 ...
 name: Hadoop:service=HBase,name=JvmMetrics,
 modelerType: JvmMetrics,
 tag.Context: jvm,
 tag.ProcessName: IPC,
 tag.SessionId: ,
 ...
 {code}
 When I use HBase with Ganglia,
 I wrote tagsForPrefix.jvm=ProcessName in hadoop-metrics2-hbase.properties.
 {code:title=hadoop-metrics2-hbase.properties}
 ...
 *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
 hbase.sink.ganglia.tagsForPrefix.jvm=ProcessName
 ...
 {code}
 But Ganglia generates only one RRD file because tag.ProcessName is IPC for 
 both Master and RegionServer.
 I think JvmMetrics needs to be separated for Master and RegionServer.





[jira] [Created] (HBASE-12334) Handling of DeserializationException causes needless retry on failure

2014-10-23 Thread James Taylor (JIRA)
James Taylor created HBASE-12334:


 Summary: Handling of DeserializationException causes needless 
retry on failure
 Key: HBASE-12334
 URL: https://issues.apache.org/jira/browse/HBASE-12334
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: James Taylor


If an unexpected exception occurs when deserialization occurs for a custom 
filter, the exception gets wrapped in a DeserializationException. Since this 
exception is in turn wrapped in an IOException, the many loop retry logic kicks 
in. The net effect is that this same deserialization error occurs again and 
again as the retries occur, just causing the client to wait needlessly.

IMO, either the parseFrom methods should be allowed to throw whatever type of 
IOException they'd like, in which case they could throw a 
DoNotRetryIOException, or a DeserializationException should be wrapped in a 
DoNotRetryIOException.
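
The second option in the description could look roughly like this. The nested exception classes below are minimal local stand-ins for HBase's (the real one is org.apache.hadoop.hbase.DoNotRetryIOException), so the sketch is self-contained; the wrapping point and parseFrom behavior are illustrative.

```java
// Hedged sketch of wrapping DeserializationException in a
// DoNotRetryIOException so the client's retry loop gives up at once
// instead of retrying the same deserialization failure.
import java.io.IOException;

public class FilterParseSketch {

    static class DeserializationException extends Exception {
        DeserializationException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Stand-in for org.apache.hadoop.hbase.DoNotRetryIOException.
    static class DoNotRetryIOException extends IOException {
        DoNotRetryIOException(Throwable cause) { super(cause); }
    }

    // Server-side wrapping point (illustrative): surface the failure as
    // non-retriable instead of a plain IOException.
    static void parseFilter(byte[] bytes) throws IOException {
        try {
            parseFrom(bytes);
        } catch (DeserializationException e) {
            throw new DoNotRetryIOException(e);
        }
    }

    // Stand-in for a custom filter's parseFrom hitting an unexpected error.
    static void parseFrom(byte[] bytes) throws DeserializationException {
        throw new DeserializationException("corrupt filter bytes",
                new IllegalArgumentException());
    }

    public static void main(String[] args) {
        try {
            parseFilter(new byte[0]);
        } catch (IOException e) {
            // The client would see a non-retriable failure here.
            System.out.println(e instanceof DoNotRetryIOException);
        }
    }
}
```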





[jira] [Commented] (HBASE-12332) [mob] use filelink instead of retry when resolving an hfilelink.

2014-10-23 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182317#comment-14182317
 ] 

Jingcheng Du commented on HBASE-12332:
--

Do we need to do this for snapshots?
Actually in mob, the hfilelinks are not used in the read path. Mob has four 
candidate paths to read from after a snapshot (mob working, mob archive, 
source mob working and source mob archive), and these paths are known without 
parsing the hfilelink. So is it necessary to detour through the hfilelink?
One more thing: an HFileLink name can only be parsed to an hbase 
working/archive dir; having it parse a file name to a mob dir is a new problem.
Please advise. Thanks.

 [mob] use filelink instead of retry when resolving an hfilelink.
 ---

 Key: HBASE-12332
 URL: https://issues.apache.org/jira/browse/HBASE-12332
 Project: HBase
  Issue Type: Sub-task
  Components: mob
Affects Versions: hbase-11339
Reporter: Jonathan Hsieh
 Fix For: hbase-11339


 in the snapshot code, hmobstore was modified to traverse an hfile link to a 
 mob.   Ideally this should use the transparent filelink code to read the data.
 Also there will likely be some issues with the mob file cache with these 
 links.





[jira] [Commented] (HBASE-12334) Handling of DeserializationException causes needless retry on failure

2014-10-23 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182321#comment-14182321
 ] 

Lars Hofhansl commented on HBASE-12334:
---

I agree. Do you have a stack trace? I find it a bit hard to find the exact 
place where the DeserializationException is wrapped.

 Handling of DeserializationException causes needless retry on failure
 -

 Key: HBASE-12334
 URL: https://issues.apache.org/jira/browse/HBASE-12334
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: James Taylor
  Labels: Phoenix

 If an unexpected exception occurs when deserialization occurs for a custom 
 filter, the exception gets wrapped in a DeserializationException. Since this 
 exception is in turn wrapped in an IOException, the many loop retry logic 
 kicks in. The net effect is that this same deserialization error occurs again 
 and again as the retries occur, just causing the client to wait needlessly.
 IMO, either the parseFrom methods should be allowed to throw whatever type of 
 IOException they'd like, in which case they could throw a 
 DoNotRetryIOException, or a DeserializationException should be wrapped in a 
 DoNotRetryIOException.





[jira] [Created] (HBASE-12335) IntegrationTestRegionReplicaPerf is flakey

2014-10-23 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HBASE-12335:


 Summary: IntegrationTestRegionReplicaPerf is flakey
 Key: HBASE-12335
 URL: https://issues.apache.org/jira/browse/HBASE-12335
 Project: HBase
  Issue Type: Test
  Components: test
Reporter: Nick Dimiduk


I find that this test often fails; the assertion that running with read 
replicas should complete faster than without is usually false. I need to 
investigate further as to why this is the case and how we should tune it.

In the mean time, I'd like to change the test to assert instead on the average 
of the stdev across all the test runs in each category. Meaning, enabling this 
feature should reduce the overall latency variance experienced by the client.
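The proposed assertion can be sketched with a hypothetical helper (the names here are illustrative, not taken from IntegrationTestRegionReplicaPerf): compute each run's latency stdev, average them per category, and assert the replica-enabled average is the smaller one.

```java
public class ReplicaPerfCheck {

    // Population standard deviation of one run's latencies (ms).
    static double stdev(double[] xs) {
        double mean = 0;
        for (double x : xs) mean += x;
        mean /= xs.length;
        double var = 0;
        for (double x : xs) var += (x - mean) * (x - mean);
        return Math.sqrt(var / xs.length);
    }

    // Average of the per-run stdevs within one category
    // (e.g. all runs with region replicas enabled).
    static double avgStdev(double[][] runs) {
        double sum = 0;
        for (double[] run : runs) sum += stdev(run);
        return sum / runs.length;
    }
}
```

The test would then assert `avgStdev(withReplicas) < avgStdev(withoutReplicas)`, i.e. that enabling the feature lowers the latency variance the client sees.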





[jira] [Commented] (HBASE-12335) IntegrationTestRegionReplicaPerf is flakey

2014-10-23 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182324#comment-14182324
 ] 

Nick Dimiduk commented on HBASE-12335:
--

It *should* also be the case that the average of the 99.9pct latency should be 
lower with this feature enabled. I'm not seeing this consistently, at least 
with a cluster of 3 RS's with a region replica factor of 3.

 IntegrationTestRegionReplicaPerf is flakey
 --

 Key: HBASE-12335
 URL: https://issues.apache.org/jira/browse/HBASE-12335
 Project: HBase
  Issue Type: Test
  Components: test
Reporter: Nick Dimiduk

 I find that this test often fails; the assertion that running with read 
 replicas should complete faster than without is usually false. I need to 
 investigate further as to why this is the case and how we should tune it.
 In the mean time, I'd like to change the test to assert instead on the 
 average of the stdev across all the test runs in each category. Meaning, 
 enabling this feature should reduce the overall latency variance experienced 
 by the client.





[jira] [Updated] (HBASE-12335) IntegrationTestRegionReplicaPerf is flaky

2014-10-23 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-12335:
-
Summary: IntegrationTestRegionReplicaPerf is flaky  (was: 
IntegrationTestRegionReplicaPerf is flakey)

 IntegrationTestRegionReplicaPerf is flaky
 -

 Key: HBASE-12335
 URL: https://issues.apache.org/jira/browse/HBASE-12335
 Project: HBase
  Issue Type: Test
  Components: test
Reporter: Nick Dimiduk

 I find that this test often fails; the assertion that running with read 
 replicas should complete faster than without is usually false. I need to 
 investigate further as to why this is the case and how we should tune it.
 In the mean time, I'd like to change the test to assert instead on the 
 average of the stdev across all the test runs in each category. Meaning, 
 enabling this feature should reduce the overall latency variance experienced 
 by the client.





[jira] [Commented] (HBASE-11164) Document and test rolling updates from 0.98 -> 1.0

2014-10-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182330#comment-14182330
 ] 

stack commented on HBASE-11164:
---

There is no support in 0.96 for HBASE-4811 Support reverse Scan.  It was 
backported to 0.94 but not to 0.96.  HBASE-10018 Remove region location 
prefetching added meta lookups that use reverse scanning.  This means we cannot 
do a rolling restart from 0.96 to 1.0.  Let me add a note to the refguide.



 Document and test rolling updates from 0.98 -> 1.0
 --

 Key: HBASE-11164
 URL: https://issues.apache.org/jira/browse/HBASE-11164
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Assignee: stack
Priority: Critical
 Fix For: 0.99.2


 I think 1.0 should be rolling upgradable from 0.98 unless we break it 
 intentionally for a specific reason. Unless there is such an issue, let's 
 document that 1.0 and 0.98 should be rolling upgrade compatible. 
 We should also test this before the 0.99 release. 





[jira] [Commented] (HBASE-12328) Need to separate JvmMetrics for Master and RegionServer

2014-10-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182340#comment-14182340
 ] 

Hadoop QA commented on HBASE-12328:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12676799/HBASE-12328.2.patch
  against trunk revision .
  ATTACHMENT ID: 12676799

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the trunk's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11457//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11457//artifact/patchprocess/patchReleaseAuditWarnings.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11457//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11457//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11457//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11457//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11457//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11457//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11457//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11457//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11457//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11457//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11457//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11457//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11457//console

This message is automatically generated.

 Need to separate JvmMetrics for Master and RegionServer
 ---

 Key: HBASE-12328
 URL: https://issues.apache.org/jira/browse/HBASE-12328
 Project: HBase
  Issue Type: Improvement
Reporter: Sanghyun Yun
Priority: Minor
 Attachments: HBASE-12328.2.patch, HBASE-12328.patch


 tag.ProcessName of JvmMetrics is IPC.
 It is the same for both Master and RegionServer.
 {code:title=HBase(Master and RegionServer)'s Metrics Dump}
 ...
 "name": "Hadoop:service=HBase,name=JvmMetrics",
 "modelerType": "JvmMetrics",
 "tag.Context": "jvm",
 "tag.ProcessName": "IPC",
 "tag.SessionId": "",
 ...
 {code}
 When I use HBase with Ganglia,
 I wrote tagsForPrefix.jvm=ProcessName in hadoop-metrics2-hbase.properties.
 {code:title=hadoop-metrics2-hbase.properties}
 ...
 *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
 hbase.sink.ganglia.tagsForPrefix.jvm=ProcessName
 ...
 {code}
 But Ganglia generates only one RRD file because tag.ProcessName is IPC for 
 both Master and RegionServer.
 I think we need to separate JvmMetrics for Master and RegionServer.





[jira] [Commented] (HBASE-12331) Shorten the mob snapshot unit tests

2014-10-23 Thread Li Jiajia (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182349#comment-14182349
 ] 

Li Jiajia commented on HBASE-12331:
---

[~jmhsieh] Should we make all the snapshot tests integration tests, or only 
some of them (the UTs that take more than 100s)?

 Shorten the mob snapshot unit tests
 ---

 Key: HBASE-12331
 URL: https://issues.apache.org/jira/browse/HBASE-12331
 Project: HBase
  Issue Type: Sub-task
  Components: mob
Affects Versions: hbase-11339
Reporter: Jonathan Hsieh
 Fix For: hbase-11339


 The mob snapshot patch introduced a whole lot of tests that take a long time 
 to run and would be better as integration tests.
 {code}
 ---
  T E S T S
 ---
 Running 
 org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClientWithRegionReplicas
 Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 394.803 sec - 
 in 
 org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClientWithRegionReplicas
 Running org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClient
 Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 212.377 sec - 
 in org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClient
 Running 
 org.apache.hadoop.hbase.client.TestMobSnapshotFromClientWithRegionReplicas
 Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.463 sec - 
 in org.apache.hadoop.hbase.client.TestMobSnapshotFromClientWithRegionReplicas
 Running org.apache.hadoop.hbase.client.TestMobSnapshotFromClient
 Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.724 sec - 
 in org.apache.hadoop.hbase.client.TestMobSnapshotFromClient
 Running org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient
 Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 204.03 sec - 
 in org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient
 Running 
 org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClientWithRegionReplicas
 Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 214.052 sec - 
 in 
 org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClientWithRegionReplicas
 Running org.apache.hadoop.hbase.client.TestMobSnapshotCloneIndependence
 Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 105.139 sec - 
 in org.apache.hadoop.hbase.client.TestMobSnapshotCloneIndependence
 Running org.apache.hadoop.hbase.regionserver.TestMobStoreScanner
 Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.42 sec - 
 in org.apache.hadoop.hbase.regionserver.TestMobStoreScanner
 Running org.apache.hadoop.hbase.regionserver.TestDeleteMobTable
 Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.136 sec - 
 in org.apache.hadoop.hbase.regionserver.TestDeleteMobTable
 Running org.apache.hadoop.hbase.regionserver.TestHMobStore
 Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.09 sec - in 
 org.apache.hadoop.hbase.regionserver.TestHMobStore
 Running org.apache.hadoop.hbase.regionserver.TestMobCompaction
 Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.629 sec - 
 in org.apache.hadoop.hbase.regionserver.TestMobCompaction
 Running org.apache.hadoop.hbase.mob.TestCachedMobFile
 Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.301 sec - 
 in org.apache.hadoop.hbase.mob.TestCachedMobFile
 Running org.apache.hadoop.hbase.mob.mapreduce.TestMobSweepJob
 Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.752 sec - 
 in org.apache.hadoop.hbase.mob.mapreduce.TestMobSweepJob
 Running org.apache.hadoop.hbase.mob.mapreduce.TestMobSweepReducer
 Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.276 sec - 
 in org.apache.hadoop.hbase.mob.mapreduce.TestMobSweepReducer
 Running org.apache.hadoop.hbase.mob.mapreduce.TestMobSweepMapper
 Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.46 sec - 
 in org.apache.hadoop.hbase.mob.mapreduce.TestMobSweepMapper
 Running org.apache.hadoop.hbase.mob.mapreduce.TestMobSweeper
 Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 173.05 sec - 
 in org.apache.hadoop.hbase.mob.mapreduce.TestMobSweeper
 Running org.apache.hadoop.hbase.mob.TestMobDataBlockEncoding
 Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.86 sec - 
 in org.apache.hadoop.hbase.mob.TestMobDataBlockEncoding
 Running org.apache.hadoop.hbase.mob.TestExpiredMobFileCleaner
 Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.029 sec - 
 in org.apache.hadoop.hbase.mob.TestExpiredMobFileCleaner
 Running org.apache.hadoop.hbase.mob.TestMobFile
 Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.562 sec - 
 in org.apache.hadoop.hbase.mob.TestMobFile
 Running 

[jira] [Commented] (HBASE-12334) Handling of DeserializationException causes needless retry on failure

2014-10-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182371#comment-14182371
 ] 

James Taylor commented on HBASE-12334:
--

Not sure which one you want - I'm tracing it in a debugger. There's enough 
throwing and catching there to make the Giants proud! :-)

Here's a few of them:

{code}
Daemon Thread [defaultRpcServer.handler=4,queue=0,port=50594] (Suspended)   
ProtobufUtil.toFilter(FilterProtos$Filter) line: 1362   
FilterList.parseFrom(byte[]) line: 403  
NativeMethodAccessorImpl.invoke0(Method, Object, Object[]) line: not 
available [native method]  
NativeMethodAccessorImpl.invoke(Object, Object[]) line: 39  
DelegatingMethodAccessorImpl.invoke(Object, Object[]) line: 25  
Method.invoke(Object, Object...) line: 597  
ProtobufUtil.toFilter(FilterProtos$Filter) line: 1360   
ProtobufUtil.toScan(ClientProtos$Scan) line: 917

MiniHBaseCluster$MiniHBaseClusterRegionServer(HRegionServer).scan(RpcController,
 ClientProtos$ScanRequest) line: 3078   

ClientProtos$ClientService$2.callBlockingMethod(Descriptors$MethodDescriptor, 
RpcController, Message) line: 29497   
RpcServer.call(BlockingService, MethodDescriptor, Message, CellScanner, 
long, MonitoredRPCHandler) line: 2027   
CallRunner.run() line: 98   
{code}

and then this:

{code}
Daemon Thread [defaultRpcServer.handler=4,queue=0,port=50594] (Suspended)   
FilterList.parseFrom(byte[]) line: 406  
NativeMethodAccessorImpl.invoke0(Method, Object, Object[]) line: not 
available [native method]  
NativeMethodAccessorImpl.invoke(Object, Object[]) line: 39  
DelegatingMethodAccessorImpl.invoke(Object, Object[]) line: 25  
Method.invoke(Object, Object...) line: 597  
ProtobufUtil.toFilter(FilterProtos$Filter) line: 1360   
ProtobufUtil.toScan(ClientProtos$Scan) line: 917

MiniHBaseCluster$MiniHBaseClusterRegionServer(HRegionServer).scan(RpcController,
 ClientProtos$ScanRequest) line: 3078   

ClientProtos$ClientService$2.callBlockingMethod(Descriptors$MethodDescriptor, 
RpcController, Message) line: 29497   
RpcServer.call(BlockingService, MethodDescriptor, Message, CellScanner, 
long, MonitoredRPCHandler) line: 2027   
CallRunner.run() line: 98   
{code}

and finally this:
{code}
Daemon Thread [defaultRpcServer.handler=4,queue=0,port=50594] (Suspended)   

MiniHBaseCluster$MiniHBaseClusterRegionServer(HRegionServer).scan(RpcController,
 ClientProtos$ScanRequest) line: 3239   

ClientProtos$ClientService$2.callBlockingMethod(Descriptors$MethodDescriptor, 
RpcController, Message) line: 29497   
RpcServer.call(BlockingService, MethodDescriptor, Message, CellScanner, 
long, MonitoredRPCHandler) line: 2027   
CallRunner.run() line: 98   
{code}

 Handling of DeserializationException causes needless retry on failure
 -

 Key: HBASE-12334
 URL: https://issues.apache.org/jira/browse/HBASE-12334
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: James Taylor
  Labels: Phoenix

 If an unexpected exception occurs when deserialization occurs for a custom 
 filter, the exception gets wrapped in a DeserializationException. Since this 
 exception is in turn wrapped in an IOException, the many loop retry logic 
 kicks in. The net effect is that this same deserialization error occurs again 
 and again as the retries occur, just causing the client to wait needlessly.
 IMO, either the parseFrom methods should be allowed to throw whatever type of 
 IOException they'd like, in which case they could throw a 
 DoNotRetryIOException, or a DeserializationException should be wrapped in a 
 DoNotRetryIOException.





[jira] [Updated] (HBASE-12277) Refactor bulkLoad methods in AccessController to its own interface

2014-10-23 Thread Madhan Neethiraj (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Madhan Neethiraj updated HBASE-12277:
-
Status: Open  (was: Patch Available)

 Refactor bulkLoad methods in AccessController to its own interface
 --

 Key: HBASE-12277
 URL: https://issues.apache.org/jira/browse/HBASE-12277
 Project: HBase
  Issue Type: Bug
Reporter: Madhan Neethiraj
 Attachments: 
 0001-HBASE-12277-Refactored-bulk-load-methods-from-Access.patch, 
 0002-HBASE-12277-License-text-added-to-the-newly-created-.patch, 
 HBASE-12277-v2.patch, HBASE-12277-v3.patch, HBASE-12277.patch


 SecureBulkLoadEndPoint references a couple of methods, prePrepareBulkLoad() and 
 preCleanupBulkLoad(), implemented in AccessController, i.e. direct coupling 
 between the AccessController and SecureBulkLoadEndPoint classes.
 SecureBulkLoadEndPoint assumes the presence of AccessController in a 
 secure cluster. If HBase is configured with another coprocessor for 
 access-control, SecureBulkLoadEndPoint fails with an NPE.
 To remove this direct coupling, the bulk-load related methods in AccessController 
 should be refactored to an interface, and AccessController should implement 
 this interface. SecureBulkLoadEndPoint should then look for coprocessors 
 that implement this interface, instead of directly looking for 
 AccessController.





[jira] [Updated] (HBASE-12277) Refactor bulkLoad methods in AccessController to its own interface

2014-10-23 Thread Madhan Neethiraj (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Madhan Neethiraj updated HBASE-12277:
-
Attachment: HBASE-12277-v4.patch

Fixes for javadoc warnings by Hadoop QA.

 Refactor bulkLoad methods in AccessController to its own interface
 --

 Key: HBASE-12277
 URL: https://issues.apache.org/jira/browse/HBASE-12277
 Project: HBase
  Issue Type: Bug
Reporter: Madhan Neethiraj
 Attachments: 
 0001-HBASE-12277-Refactored-bulk-load-methods-from-Access.patch, 
 0002-HBASE-12277-License-text-added-to-the-newly-created-.patch, 
 HBASE-12277-v2.patch, HBASE-12277-v3.patch, HBASE-12277-v4.patch, 
 HBASE-12277.patch


 SecureBulkLoadEndPoint references a couple of methods, prePrepareBulkLoad() and 
 preCleanupBulkLoad(), implemented in AccessController, i.e. direct coupling 
 between the AccessController and SecureBulkLoadEndPoint classes.
 SecureBulkLoadEndPoint assumes the presence of AccessController in a 
 secure cluster. If HBase is configured with another coprocessor for 
 access-control, SecureBulkLoadEndPoint fails with an NPE.
 To remove this direct coupling, the bulk-load related methods in AccessController 
 should be refactored to an interface, and AccessController should implement 
 this interface. SecureBulkLoadEndPoint should then look for coprocessors 
 that implement this interface, instead of directly looking for 
 AccessController.





[jira] [Updated] (HBASE-12277) Refactor bulkLoad methods in AccessController to its own interface

2014-10-23 Thread Madhan Neethiraj (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Madhan Neethiraj updated HBASE-12277:
-
Status: Patch Available  (was: Open)

fixed javadoc warnings.

 Refactor bulkLoad methods in AccessController to its own interface
 --

 Key: HBASE-12277
 URL: https://issues.apache.org/jira/browse/HBASE-12277
 Project: HBase
  Issue Type: Bug
Reporter: Madhan Neethiraj
 Attachments: 
 0001-HBASE-12277-Refactored-bulk-load-methods-from-Access.patch, 
 0002-HBASE-12277-License-text-added-to-the-newly-created-.patch, 
 HBASE-12277-v2.patch, HBASE-12277-v3.patch, HBASE-12277-v4.patch, 
 HBASE-12277.patch


 SecureBulkLoadEndPoint references a couple of methods, prePrepareBulkLoad() and 
 preCleanupBulkLoad(), implemented in AccessController, i.e. direct coupling 
 between the AccessController and SecureBulkLoadEndPoint classes.
 SecureBulkLoadEndPoint assumes the presence of AccessController in a 
 secure cluster. If HBase is configured with another coprocessor for 
 access-control, SecureBulkLoadEndPoint fails with an NPE.
 To remove this direct coupling, the bulk-load related methods in AccessController 
 should be refactored to an interface, and AccessController should implement 
 this interface. SecureBulkLoadEndPoint should then look for coprocessors 
 that implement this interface, instead of directly looking for 
 AccessController.





[jira] [Commented] (HBASE-12329) Table create with duplicate column family names quietly succeeds

2014-10-23 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182381#comment-14182381
 ] 

Jingcheng Du commented on HBASE-12329:
--

Hi [~busbey], can I work on this?

 Table create with duplicate column family names quietly succeeds
 

 Key: HBASE-12329
 URL: https://issues.apache.org/jira/browse/HBASE-12329
 Project: HBase
  Issue Type: Bug
  Components: Client, shell
Reporter: Sean Busbey
Priority: Minor

 From the mailing list
 {quote}
 I was expecting that it is forbidden, **but** this call does not throw any
 exception
 {code}
 String[] families = {"cf", "cf"};
 HTableDescriptor desc = new HTableDescriptor(name);
 for (String cf : families) {
   HColumnDescriptor coldef = new HColumnDescriptor(cf);
   desc.addFamily(coldef);
 }
 try {
   admin.createTable(desc);
 } catch (TableExistsException e) {
   throw new IOException("table '" + name + "' already exists");
 }
 {code}
 {quote}
 And Ted's follow up replicates in the shell
 {code}
 hbase(main):001:0> create 't2', {NAME => 'f1'}, {NAME => 'f1'}
 The table got created - with 1 column family:
 hbase(main):002:0> describe 't2'
 DESCRIPTION                                                            ENABLED
  't2', {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', true
  REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE',
  MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false',
  BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
 1 row(s) in 0.1000 seconds
 {code}
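One possible fix is to reject the duplicate at addFamily() time instead of letting the second family silently overwrite the first. A hypothetical stand-in sketch (not the actual HTableDescriptor implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the requested behavior: families keyed by name,
// with a duplicate name raising an error instead of collapsing quietly.
public class TableDescriptorSketch {
    private final Map<String, Object> families = new LinkedHashMap<>();

    // Throw instead of silently replacing an existing family.
    public void addFamily(String name) {
        if (families.containsKey(name)) {
            throw new IllegalArgumentException(
                "Family '" + name + "' already exists");
        }
        families.put(name, new Object());
    }

    public int familyCount() { return families.size(); }
}
```

The same check could live in the shell's `create` argument parsing, the client, or the master-side table sanity checks; failing at addFamily() catches the problem earliest.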





[jira] [Updated] (HBASE-12322) Add clean up command to ITBLL

2014-10-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12322:
--
   Resolution: Fixed
Fix Version/s: 0.99.2
   2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to branch-1+ for [~eclark]

 Add clean up command to ITBLL
 -

 Key: HBASE-12322
 URL: https://issues.apache.org/jira/browse/HBASE-12322
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12322.patch


 Right now ITBLL can leave a table and some files on HDFS. It's then up to the 
 user to clean them up. This can be a little messy. Lets give a single command 
 to do that.





[jira] [Commented] (HBASE-12334) Handling of DeserializationException causes needless retry on failure

2014-10-23 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182408#comment-14182408
 ] 

Lars Hofhansl commented on HBASE-12334:
---

Ah, that's why I didn't see it. The invoke throws an InvocationTargetException.

 Handling of DeserializationException causes needless retry on failure
 -

 Key: HBASE-12334
 URL: https://issues.apache.org/jira/browse/HBASE-12334
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: James Taylor
  Labels: Phoenix

 If an unexpected exception occurs when deserialization occurs for a custom 
 filter, the exception gets wrapped in a DeserializationException. Since this 
 exception is in turn wrapped in an IOException, the many loop retry logic 
 kicks in. The net effect is that this same deserialization error occurs again 
 and again as the retries occur, just causing the client to wait needlessly.
 IMO, either the parseFrom methods should be allowed to throw whatever type of 
 IOException they'd like, in which case they could throw a 
 DoNotRetryIOException, or a DeserializationException should be wrapped in a 
 DoNotRetryIOException.





[jira] [Updated] (HBASE-12334) Handling of DeserializationException causes needless retry on failure

2014-10-23 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-12334:
--
Attachment: 12334-0.98.txt

Maybe just this.
The last exception is probably just downstream from a toFilter call.

 Handling of DeserializationException causes needless retry on failure
 -

 Key: HBASE-12334
 URL: https://issues.apache.org/jira/browse/HBASE-12334
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: James Taylor
  Labels: Phoenix
 Attachments: 12334-0.98.txt


 If an unexpected exception occurs when deserialization occurs for a custom 
 filter, the exception gets wrapped in a DeserializationException. Since this 
 exception is in turn wrapped in an IOException, the many loop retry logic 
 kicks in. The net effect is that this same deserialization error occurs again 
 and again as the retries occur, just causing the client to wait needlessly.
 IMO, either the parseFrom methods should be allowed to throw whatever type of 
 IOException they'd like, in which case they could throw a 
 DoNotRetryIOException, or a DeserializationException should be wrapped in a 
 DoNotRetryIOException.





[jira] [Commented] (HBASE-12327) MetricsHBaseServerSourceFactory#createContextName has wrong conditions

2014-10-23 Thread Sanghyun Yun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182416#comment-14182416
 ] 

Sanghyun Yun commented on HBASE-12327:
--

Thanks for your review, [~tedyu].
I'm concerned about side effects.
Do you think it's better to remove old conditions?

 MetricsHBaseServerSourceFactory#createContextName has wrong conditions
 --

 Key: HBASE-12327
 URL: https://issues.apache.org/jira/browse/HBASE-12327
 Project: HBase
  Issue Type: Bug
Reporter: Sanghyun Yun
 Attachments: HBASE-12327.patch


 MetricsHBaseServerSourceFactory#createContextName has wrong conditions.
 It checks whether serverName contains "HMaster" or "HRegion".
 {code:title=MetricsHBaseServerSourceFactory.java}
 ...
   protected static String createContextName(String serverName) {
 if (serverName.contains("HMaster")) {
   return "Master";
 } else if (serverName.contains("HRegion")) {
   return "RegionServer";
 }
 return "IPC";
   }
 ...
 {code}
 But the serverName passed in actually contains "master" or "regionserver", via 
 HMaster#getProcessName and HRegionServer#getProcessName.
 {code:title=HMaster.java}
 ...
   // MASTER is name of the webapp and the attribute name used stuffing this
   // instance into web context.
   public static final String MASTER = "master";
 ...
   protected String getProcessName() {
 return MASTER;
   }
 ...
 {code}
 {code:title=HRegionServer.java}
 ...
   /** region server process name */
   public static final String REGIONSERVER = "regionserver";
 ...
   protected String getProcessName() {
 return REGIONSERVER;
   }
 ...
 {code}
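A possible fix, sketched here as a stand-alone class rather than the committed patch, is to match the process names actually passed today ("master"/"regionserver") while keeping the old "HMaster"/"HRegion" checks for compatibility:

```java
// Hypothetical corrected version of createContextName; the real method lives
// in MetricsHBaseServerSourceFactory.
public class ContextNameSketch {
    static String createContextName(String serverName) {
        // Accept both the historical class-name checks and the values
        // returned by HMaster/HRegionServer#getProcessName today.
        if (serverName.contains("HMaster") || serverName.contains("master")) {
            return "Master";
        } else if (serverName.contains("HRegion")
                || serverName.contains("regionserver")) {
            return "RegionServer";
        }
        return "IPC";
    }
}
```

Keeping the old conditions avoids breaking any caller that still passes a class name, which speaks to the side-effect concern raised above.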





[jira] [Commented] (HBASE-12329) Table create with duplicate column family names quietly succeeds

2014-10-23 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182423#comment-14182423
 ] 

Sean Busbey commented on HBASE-12329:
-

yes, please do! I'd be happy to provide a review when you have something.

 Table create with duplicate column family names quietly succeeds
 

 Key: HBASE-12329
 URL: https://issues.apache.org/jira/browse/HBASE-12329
 Project: HBase
  Issue Type: Bug
  Components: Client, shell
Reporter: Sean Busbey
Priority: Minor

 From the mailing list
 {quote}
 I was expecting that it is forbidden, **but** this call does not throw any
 exception
 {code}
  String[] families = {"cf", "cf"};
  HTableDescriptor desc = new HTableDescriptor(name);
  for (String cf : families) {
    HColumnDescriptor coldef = new HColumnDescriptor(cf);
    desc.addFamily(coldef);
  }
  try {
    admin.createTable(desc);
  } catch (TableExistsException e) {
    throw new IOException("table '" + name + "' already exists");
  }
 {code}
 {quote}
 And Ted's follow up replicates in the shell
 {code}
  hbase(main):001:0> create 't2', {NAME => 'f1'}, {NAME => 'f1'}
  The table got created - with 1 column family:
  hbase(main):002:0> describe 't2'
  DESCRIPTION                                                            ENABLED
   't2', {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}  true
  1 row(s) in 0.1000 seconds
 {code}
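
 Until the create path rejects duplicates, a client-side guard is easy to 
 sketch (hypothetical helper, not part of the HBase API; the class and method 
 names below are made up for the example):
 ```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical client-side guard, not part of the HBase API: fail loudly on
// duplicate family names before handing the descriptor to admin.createTable(),
// instead of letting the descriptor silently collapse them into one family.
public class FamilyNameGuard {
    static void checkNoDuplicateFamilies(String... families) {
        Set<String> seen = new HashSet<>();
        for (String cf : families) {
            // Set#add returns false when the element was already present.
            if (!seen.add(cf)) {
                throw new IllegalArgumentException("duplicate column family: " + cf);
            }
        }
    }

    public static void main(String[] args) {
        checkNoDuplicateFamilies("f1", "f2"); // fine, distinct names
        try {
            checkNoDuplicateFamilies("f1", "f1");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // duplicate column family: f1
        }
    }
}
```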



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-11164) Document and test rolling updates from 0.98 -> 1.0

2014-10-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-11164.
---
  Resolution: Fixed
Hadoop Flags: Reviewed

I just ran the rolling upgrade script and it worked.  Did it on a small cluster.  
No exceptions in the logs other than the expected ones when RSs go away for a 
while as they are rolling upgraded.  Did this after changing the symlink to 
point at the new software:

HADOOP_HOME=~/hadoop-2.6.0-CRC-SNAPSHOT ~/hbase/bin/rolling-restart.sh --config 
~/conf_hbase

Added more doc to the upgrade section on what a rolling upgrade is, and added 
some to the 0.98 to 1.0 section.

Resolving as done.

 Document and test rolling updates from 0.98 -> 1.0
 --

 Key: HBASE-11164
 URL: https://issues.apache.org/jira/browse/HBASE-11164
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Assignee: stack
Priority: Critical
 Fix For: 0.99.2


 I think 1.0 should be rolling upgradable from 0.98 unless we break it 
 intentionally for a specific reason. Unless there is such an issue, lets 
 document that 1.0 and 0.98 should be rolling upgrade compatible. 
 We should also test this before the 0.99 release. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10856) Prep for 1.0

2014-10-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182426#comment-14182426
 ] 

stack commented on HBASE-10856:
---

Removed the HBASE-10403 "Simplify offheap cache configuration" link as it has 
fix version 2.0.
Ditto for HBASE-10504 "Define Replication Interface".

 Prep for 1.0
 

 Key: HBASE-10856
 URL: https://issues.apache.org/jira/browse/HBASE-10856
 Project: HBase
  Issue Type: Umbrella
Reporter: stack
 Fix For: 0.99.2


 Tasks for 1.0 copied here from our '1.0.0' mailing list discussion.  Idea is 
 to file subtasks off this one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12334) Handling of DeserializationException causes needless retry on failure

2014-10-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182427#comment-14182427
 ] 

stack commented on HBASE-12334:
---

+1 That'll fix James' issue.  Could do with a more general solution doing this 
for all parseFroms, but how about getting this in for now?

 Handling of DeserializationException causes needless retry on failure
 -

 Key: HBASE-12334
 URL: https://issues.apache.org/jira/browse/HBASE-12334
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: James Taylor
  Labels: Phoenix
 Attachments: 12334-0.98.txt


 If an unexpected exception occurs during deserialization of a custom 
 filter, the exception gets wrapped in a DeserializationException. Since this 
 exception is in turn wrapped in an IOException, the client retry loop 
 kicks in. The net effect is that the same deserialization error occurs again 
 and again as the retries happen, just causing the client to wait needlessly.
 IMO, either the parseFrom methods should be allowed to throw whatever type of 
 IOException they'd like, in which case they could throw a 
 DoNotRetryIOException, or a DeserializationException should be wrapped in a 
 DoNotRetryIOException.
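
 A minimal sketch of the second option (wrapping in DoNotRetryIOException). 
 The nested classes below are simplified stand-ins for the real HBase types of 
 the same names, and parseFrom is a made-up stand-in for a custom filter's 
 parser:
 ```java
import java.io.IOException;

// Illustrative sketch of wrapping DeserializationException in a non-retriable
// IOException so the client fails fast instead of retrying a parse that can
// never succeed. The nested classes are simplified stand-ins, not HBase types.
public class ParseGuardSketch {
    static class DeserializationException extends Exception {
        DeserializationException(String msg) { super(msg); }
    }

    static class DoNotRetryIOException extends IOException {
        DoNotRetryIOException(Throwable cause) { super(cause); }
    }

    // Stand-in for a custom filter's parseFrom(byte[]).
    static String parseFrom(byte[] pb) throws DeserializationException {
        if (pb == null || pb.length == 0) {
            throw new DeserializationException("empty serialized filter");
        }
        return new String(pb);
    }

    static String deserializeFilter(byte[] pb) throws IOException {
        try {
            return parseFrom(pb);
        } catch (DeserializationException e) {
            // Retrying cannot fix a malformed serialized filter: do not retry.
            throw new DoNotRetryIOException(e);
        }
    }
}
```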



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12327) MetricsHBaseServerSourceFactory#createContextName has wrong conditions

2014-10-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182433#comment-14182433
 ] 

Ted Yu commented on HBASE-12327:


Can you add (temporary) logs to see whether the old ones are ever passed?

Thanks

 MetricsHBaseServerSourceFactory#createContextName has wrong conditions
 --

 Key: HBASE-12327
 URL: https://issues.apache.org/jira/browse/HBASE-12327
 Project: HBase
  Issue Type: Bug
Reporter: Sanghyun Yun
 Attachments: HBASE-12327.patch


 MetricsHBaseServerSourceFactory#createContextName has wrong conditions.
 It checks whether serverName contains "HMaster" or "HRegion".
 {code:title=MetricsHBaseServerSourceFactory.java}
 ...
   protected static String createContextName(String serverName) {
     if (serverName.contains("HMaster")) {
       return "Master";
     } else if (serverName.contains("HRegion")) {
       return "RegionServer";
     }
     return "IPC";
   }
 ...
 {code}
 But the serverName actually passed contains "master" or "regionserver", per 
 HMaster#getProcessName and HRegionServer#getProcessName.
 {code:title=HMaster.java}
 ...
   // MASTER is name of the webapp and the attribute name used stuffing this
   // instance into web context.
   public static final String MASTER = "master";
 ...
   protected String getProcessName() {
     return MASTER;
   }
 ...
 {code}
 {code:title=HRegionServer.java}
 ...
   /** region server process name */
   public static final String REGIONSERVER = "regionserver";
 ...
   protected String getProcessName() {
     return REGIONSERVER;
   }
 ...
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12328) Need to separate JvmMetrics for Master and RegionServer

2014-10-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12328:
--
   Resolution: Fixed
Fix Version/s: 0.99.2
   2.0.0
   Status: Resolved  (was: Patch Available)

Committed to branch-1.  Thank you for the patch [~yunsh]

 Need to separate JvmMetrics for Master and RegionServer
 ---

 Key: HBASE-12328
 URL: https://issues.apache.org/jira/browse/HBASE-12328
 Project: HBase
  Issue Type: Improvement
Reporter: Sanghyun Yun
Priority: Minor
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12328.2.patch, HBASE-12328.patch


 tag.ProcessName of JvmMetrics is IPC.
 It is the same for both Master and RegionServer.
 {code:title=HBase(Master and RegionServer)'s Metrics Dump}
 ...
 name: Hadoop:service=HBase,name=JvmMetrics,
 modelerType: JvmMetrics,
 tag.Context: jvm,
 tag.ProcessName: IPC,
 tag.SessionId: ,
 ...
 {code}
 When I use HBase with Ganglia,
 I set tagsForPrefix.jvm=ProcessName in hadoop-metrics2-hbase.properties.
 {code:title=hadoop-metrics2-hbase.properties}
 ...
 *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
 hbase.sink.ganglia.tagsForPrefix.jvm=ProcessName
 ...
 {code}
 But Ganglia generates only one RRD file, because tag.ProcessName is IPC for 
 both Master and RegionServer.
 I think JvmMetrics needs to be separated for Master and RegionServer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11164) Document and test rolling updates from 0.98 -> 1.0

2014-10-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182445#comment-14182445
 ] 

Hudson commented on HBASE-11164:


FAILURE: Integrated in HBase-TRUNK #5694 (See 
[https://builds.apache.org/job/HBase-TRUNK/5694/])
HBASE-11164 Document and test rolling updates from 0.98 -> 1.0; Add note on why 
can't go from 0.96 to 1.0 and define what rolling upgrade is (stack: rev 
eae0f202cec535bfa0fc8dff1401b7afbe11f33f)
* src/main/docbkx/upgrading.xml


 Document and test rolling updates from 0.98 -> 1.0
 --

 Key: HBASE-11164
 URL: https://issues.apache.org/jira/browse/HBASE-11164
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Assignee: stack
Priority: Critical
 Fix For: 0.99.2


 I think 1.0 should be rolling upgradable from 0.98 unless we break it 
 intentionally for a specific reason. Unless there is such an issue, lets 
 document that 1.0 and 0.98 should be rolling upgrade compatible. 
 We should also test this before the 0.99 release. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12322) Add clean up command to ITBLL

2014-10-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14182446#comment-14182446
 ] 

Hudson commented on HBASE-12322:


FAILURE: Integrated in HBase-TRUNK #5694 (See 
[https://builds.apache.org/job/HBase-TRUNK/5694/])
HBASE-12322 Add Clean command to ITBLL (stack: rev 
11638a8cf294d57883928c91ab7d8d3d2eea6c7d)
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java


 Add clean up command to ITBLL
 -

 Key: HBASE-12322
 URL: https://issues.apache.org/jira/browse/HBASE-12322
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12322.patch


 Right now ITBLL can leave a table and some files on HDFS. It's then up to the 
 user to clean them up. This can be a little messy. Let's give a single command 
 to do that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

