[jira] [Commented] (HBASE-9808) org.apache.hadoop.hbase.rest.PerformanceEvaluation is out of sync with org.apache.hadoop.hbase.PerformanceEvaluation

2013-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815755#comment-13815755
 ] 

Hadoop QA commented on HBASE-9808:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612526/HBASE-9808-v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100 characters.

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7773//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7773//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7773//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7773//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7773//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7773//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7773//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7773//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7773//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7773//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7773//console

This message is automatically generated.

 org.apache.hadoop.hbase.rest.PerformanceEvaluation is out of sync with 
 org.apache.hadoop.hbase.PerformanceEvaluation
 

 Key: HBASE-9808
 URL: https://issues.apache.org/jira/browse/HBASE-9808
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Gustavo Anatoly
 Attachments: HBASE-9808-v1.patch, HBASE-9808-v2.patch, 
 HBASE-9808.patch


 Here is list of JIRAs whose fixes might have gone into 
 rest.PerformanceEvaluation :
 {code}
 
 r1527817 | mbertozzi | 2013-09-30 15:57:44 -0700 (Mon, 30 Sep 2013) | 1 line
 HBASE-9663 PerformanceEvaluation does not properly honor specified table name 
 parameter
 
 r1526452 | mbertozzi | 2013-09-26 04:58:50 -0700 (Thu, 26 Sep 2013) | 1 line
 HBASE-9662 PerformanceEvaluation input do not handle tags properties
 
 r1525269 | ramkrishna | 2013-09-21 11:01:32 -0700 (Sat, 21 Sep 2013) | 3 lines
 HBASE-8496 - Implement tags and the internals of how a tag should look like 
 (Ram)
 
 r1524985 | nkeywal | 2013-09-20 06:02:54 -0700 (Fri, 20 Sep 2013) | 1 line
 HBASE-9558  PerformanceEvaluation is in hbase-server, and creates a 
 dependency to MiniDFSCluster
 
 r1523782 | nkeywal | 2013-09-16 13:07:13 -0700 (Mon, 16 Sep 2013) | 1 line
 HBASE-9521  clean clearBufferOnFail behavior and deprecate it
 
 r1518341 | jdcryans | 2013-08-28 

[jira] [Created] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointer error

2013-11-07 Thread JIRA
刘泓 created HBASE-9913:
-

 Summary: weblogic deployment project implementation under the 
mapreduce hbase reported a NullPointer error 
 Key: HBASE-9913
 URL: https://issues.apache.org/jira/browse/HBASE-9913
 Project: HBase
  Issue Type: Bug
  Components: hadoop2
Affects Versions: 0.94.10
 Environment: weblogic windows
Reporter: 刘泓


java.lang.NullPointerException
at java.io.File.<init>(File.java:222)
at java.util.zip.ZipFile.<init>(ZipFile.java:75)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.updateMap(TableMapReduceUtil.java:617)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.findOrCreateJar(TableMapReduceUtil.java:597)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:557)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:518)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:144)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:221)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:87)
at 
com.easymap.ezserver6.map.source.hbase.convert.HBaseMapMerge.beginMerge(HBaseMapMerge.java:163)
at 
com.easymap.ezserver6.app.servlet.EzMapToHbaseService.doPost(EzMapToHbaseService.java:32)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at 
weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at 
weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at 
weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
at 
weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
at 
weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at 
weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
at 
weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
at 
weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
at 
weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointer error

2013-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Fix Version/s: 0.94.10
   Status: Patch Available  (was: Open)



[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointer error

2013-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Status: Open  (was: Patch Available)



[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointer error

2013-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Description: 

By setting breakpoints and tracing the HBase source code under both Tomcat and 
WebLogic, we found that the string returned by TableMapReduceUtil.findOrCreateJar 
is null. Under Tomcat, getProtocol() on the jar URL returns "jar", but under 
WebLogic it returns "zip", so we added a check for the "zip" protocol type.
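The protocol check described above can be sketched as follows. This is a hypothetical illustration, not the actual HBASE-9913 patch: the class name, method signature, and path handling are assumptions for the sketch; only the "jar" vs. "zip" protocol distinction comes from the report.

```java
import java.net.URL;

// Hypothetical sketch (not the actual HBASE-9913 patch): accept WebLogic's
// "zip" URL protocol alongside the standard "jar" protocol when extracting
// the path of the archive that contains a class resource.
public class JarPathResolver {

    /**
     * Resolves the archive path from the protocol and path of a .class
     * resource URL, or returns null if the class is not inside an archive.
     */
    public static String archivePath(String protocol, String path) {
        // Tomcat reports protocol "jar" for classes inside archives;
        // WebLogic reports "zip" for the same situation. Code that checks
        // only for "jar" falls through and returns null under WebLogic,
        // which leads to the NullPointerException in the stack trace above.
        if (!"jar".equals(protocol) && !"zip".equals(protocol)) {
            return null; // class not loaded from an archive
        }
        if (path.startsWith("file:")) {
            path = path.substring("file:".length());
        }
        int bang = path.indexOf('!'); // '!' separates archive from entry
        return bang >= 0 ? path.substring(0, bang) : path;
    }

    public static void main(String[] args) throws Exception {
        URL url = new URL("jar:file:/tmp/app.jar!/com/example/Foo.class");
        // prints /tmp/app.jar
        System.out.println(archivePath(url.getProtocol(), url.getPath()));
    }
}
```

Taking the protocol and path as plain strings keeps the check testable, since the JDK's URL class has no registered handler for a "zip" scheme and would reject such a URL outright.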

[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointer error

2013-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Status: Patch Available  (was: Open)



[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointer error

2013-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)



[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointer error

2013-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Attachment: TableMapReduceUtil.class
TableMapReduceUtil.java

Attached are the Java source and the compiled class file; replace the 
corresponding class file in your hbase-xx.jar.



[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointer error

2013-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Description: 

By tracing the HBase source code under WebLogic, we found that the string 
returned by TableMapReduceUtil.findOrCreateJar is null: under Tomcat, 
getProtocol() on the jar URL returns "jar", but under WebLogic it returns 
"zip", so we added a check for the "zip" protocol type.

[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointer error

2013-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Description: 
java.lang.NullPointerException
at java.io.File.<init>(File.java:222)
at java.util.zip.ZipFile.<init>(ZipFile.java:75)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.updateMap(TableMapReduceUtil.java:617)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.findOrCreateJar(TableMapReduceUtil.java:597)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:557)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:518)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:144)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:221)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:87)
at com.easymap.ezserver6.map.source.hbase.convert.HBaseMapMerge.beginMerge(HBaseMapMerge.java:163)
at com.easymap.ezserver6.app.servlet.EzMapToHbaseService.doPost(EzMapToHbaseService.java:32)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)

 

By inspecting the HBase source code under WebLogic, we found that the string 
returned by TableMapReduceUtil.findOrCreateJar is null: under Tomcat the jar 
file's URL protocol is the "jar" type, but under WebLogic it is the "zip" 
type, and the findOrCreateJar method can't resolve the "zip" type, so we 
should add handling for the "zip" type.

  was:
java.lang.NullPointerException
at java.io.File.<init>(File.java:222)
at java.util.zip.ZipFile.<init>(ZipFile.java:75)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.updateMap(TableMapReduceUtil.java:617)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.findOrCreateJar(TableMapReduceUtil.java:597)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:557)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:518)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:144)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:221)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:87)
at com.easymap.ezserver6.map.source.hbase.convert.HBaseMapMerge.beginMerge(HBaseMapMerge.java:163)
at com.easymap.ezserver6.app.servlet.EzMapToHbaseService.doPost(EzMapToHbaseService.java:32)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
at 

[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointer error

2013-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Description: 
java.lang.NullPointerException
at java.io.File.<init>(File.java:222)
at java.util.zip.ZipFile.<init>(ZipFile.java:75)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.updateMap(TableMapReduceUtil.java:617)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.findOrCreateJar(TableMapReduceUtil.java:597)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:557)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:518)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:144)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:221)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:87)
at com.easymap.ezserver6.map.source.hbase.convert.HBaseMapMerge.beginMerge(HBaseMapMerge.java:163)
at com.easymap.ezserver6.app.servlet.EzMapToHbaseService.doPost(EzMapToHbaseService.java:32)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)

 

By inspecting the HBase source code under WebLogic, we found that the string 
returned by TableMapReduceUtil.findOrCreateJar is null: under Tomcat the jar 
file's URL protocol is the "jar" type, but under WebLogic it is the "zip" 
type, and the findOrCreateJar method can't resolve the "zip" type, so we 
should add a check for the "zip" type.

  was:
java.lang.NullPointerException
at java.io.File.<init>(File.java:222)
at java.util.zip.ZipFile.<init>(ZipFile.java:75)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.updateMap(TableMapReduceUtil.java:617)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.findOrCreateJar(TableMapReduceUtil.java:597)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:557)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:518)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:144)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:221)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:87)
at com.easymap.ezserver6.map.source.hbase.convert.HBaseMapMerge.beginMerge(HBaseMapMerge.java:163)
at com.easymap.ezserver6.app.servlet.EzMapToHbaseService.doPost(EzMapToHbaseService.java:32)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
at 

[jira] [Updated] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time

2013-11-07 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-9902:
--

Fix Version/s: (was: 0.96.0)
   0.96.1

 Region Server is starting normally even if clock skew is more than default 30 
 seconds(or any configured). - Regionserver node time is greater than master 
 node time
 

 Key: HBASE-9902
 URL: https://issues.apache.org/jira/browse/HBASE-9902
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.11
Reporter: Kashif J S
 Fix For: 0.98.0, 0.96.1

 Attachments: HBASE-9902.patch


 When Region server's time is ahead of Master's time and the difference is 
 more than hbase.master.maxclockskew value, region server startup is not 
 failing with ClockOutOfSyncException.
 This causes some abnormal behavior as detected by our Tests.
 ServerManager.java#checkClockSkew
   long skew = System.currentTimeMillis() - serverCurrentTime;
 if (skew > maxSkew) {
   String message = "Server " + serverName + " has been " +
 "rejected; Reported time is too far out of sync with master. " +
 "Time difference of " + skew + "ms > max allowed of " + maxSkew + "ms";
   LOG.warn(message);
   throw new ClockOutOfSyncException(message);
 }
 The line above yields a negative value when the master's time is less than 
 the region server's time, and the if (skew > maxSkew) check then fails to 
 detect the skew in this case.
 Please note: this was tested on HBase 0.94.11, and trunk currently has the 
 same logic.
 The fix would be to make the skew a positive value first, as below:
  long skew = System.currentTimeMillis() - serverCurrentTime;
 skew = (skew < 0 ? -skew : skew);
 if (skew > maxSkew) {
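The proposed fix can be sketched as a standalone check (an illustrative sketch only, not the attached patch; the method shape and the maxSkew parameter are assumptions, and the real logic lives in ServerManager.checkClockSkew):

```java
// Sketch of the absolute-value clock skew check proposed in this issue.
// The class and method names are illustrative, not HBase's actual code.
public class ClockSkewCheck {
    static boolean outOfSync(long masterTime, long serverCurrentTime, long maxSkew) {
        long skew = masterTime - serverCurrentTime;
        skew = (skew < 0 ? -skew : skew);  // absolute value: catch fast AND slow servers
        return skew > maxSkew;
    }
}
```

With a 30000 ms maxSkew, a region server 60 s ahead of the master (where the raw skew is negative) is now rejected, matching the behavior this patch intends.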



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time

2013-11-07 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-9902:
--

Release Note: Clock skew detection is to be made an absolute-value 
comparison: any time difference between master and region server beyond the 
limit, whether ahead or behind, must prevent the region server startup
  Status: Patch Available  (was: Open)

Clock skew detection is to be made an absolute-value comparison: any time 
difference between master and region server beyond the limit, whether ahead or 
behind, must prevent the region server startup

 Region Server is starting normally even if clock skew is more than default 30 
 seconds(or any configured). - Regionserver node time is greater than master 
 node time
 

 Key: HBASE-9902
 URL: https://issues.apache.org/jira/browse/HBASE-9902
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.11
Reporter: Kashif J S
 Fix For: 0.98.0, 0.96.1

 Attachments: HBASE-9902.patch


 When Region server's time is ahead of Master's time and the difference is 
 more than hbase.master.maxclockskew value, region server startup is not 
 failing with ClockOutOfSyncException.
 This causes some abnormal behavior as detected by our Tests.
 ServerManager.java#checkClockSkew
   long skew = System.currentTimeMillis() - serverCurrentTime;
 if (skew > maxSkew) {
   String message = "Server " + serverName + " has been " +
 "rejected; Reported time is too far out of sync with master. " +
 "Time difference of " + skew + "ms > max allowed of " + maxSkew + "ms";
   LOG.warn(message);
   throw new ClockOutOfSyncException(message);
 }
 The line above yields a negative value when the master's time is less than 
 the region server's time, and the if (skew > maxSkew) check then fails to 
 detect the skew in this case.
 Please note: this was tested on HBase 0.94.11, and trunk currently has the 
 same logic.
 The fix would be to make the skew a positive value first, as below:
  long skew = System.currentTimeMillis() - serverCurrentTime;
 skew = (skew < 0 ? -skew : skew);
 if (skew > maxSkew) {





[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointerException

2013-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Summary: weblogic deployment project implementation under the mapreduce 
hbase reported a NullPointerException  (was: weblogic deployment project 
implementation under the mapreduce hbase reported a NullPointer error )

 weblogic deployment project implementation under the mapreduce hbase reported 
 a NullPointerException
 

 Key: HBASE-9913
 URL: https://issues.apache.org/jira/browse/HBASE-9913
 Project: HBase
  Issue Type: Bug
  Components: hadoop2
Affects Versions: 0.94.10
 Environment: weblogic windows
Reporter: 刘泓
  Labels: patch
 Fix For: 0.94.10

 Attachments: TableMapReduceUtil.class, TableMapReduceUtil.java


 java.lang.NullPointerException
   at java.io.File.<init>(File.java:222)
   at java.util.zip.ZipFile.<init>(ZipFile.java:75)
   at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.updateMap(TableMapReduceUtil.java:617)
   at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.findOrCreateJar(TableMapReduceUtil.java:597)
   at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:557)
   at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:518)
   at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:144)
   at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:221)
   at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:87)
   at com.easymap.ezserver6.map.source.hbase.convert.HBaseMapMerge.beginMerge(HBaseMapMerge.java:163)
   at com.easymap.ezserver6.app.servlet.EzMapToHbaseService.doPost(EzMapToHbaseService.java:32)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
   at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
   at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
   at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
   at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
   at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
   at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
   at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
   at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
   at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
   at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
   at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
   at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
  
 By inspecting the HBase source code under WebLogic, we found that the string 
 returned by TableMapReduceUtil.findOrCreateJar is null: under Tomcat the jar 
 file's URL protocol is the "jar" type, but under WebLogic it is the "zip" 
 type, and the findOrCreateJar method can't resolve the "zip" type, so we 
 should add a check for the "zip" type.
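The idea described in this report can be sketched outside HBase as follows. This is not the attached patch: the class name, the helper's shape, and the URL layouts are assumptions for illustration; it only shows accepting both the Tomcat-style "jar" protocol and the WebLogic-style "zip" protocol when locating a class's containing jar.

```java
// Illustrative sketch only (not TableMapReduceUtil's real code): normalize a
// classloader resource URL to the path of its containing jar, accepting both
// the "jar" protocol (Tomcat) and the "zip" protocol (WebLogic).
public class JarLocator {
    static String containingJarPath(String protocol, String url) {
        if ("jar".equals(protocol) || "zip".equals(protocol)) {
            String path = url;
            int bang = path.indexOf("!/");          // drop the in-jar entry suffix
            if (bang >= 0) path = path.substring(0, bang);
            if (path.startsWith(protocol + ":")) {  // drop the "jar:"/"zip:" prefix
                path = path.substring(protocol.length() + 1);
            }
            if (path.startsWith("file:")) {         // drop a nested "file:" prefix
                path = path.substring("file:".length());
            }
            return path;
        }
        return null;  // unknown protocol: a caller would fall back to building a jar
    }
}
```

The point is simply that code checking only for `"jar".equals(protocol)` returns null under WebLogic, which is what produces the NullPointerException above.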





[jira] [Commented] (HBASE-9775) Client write path perf issues

2013-11-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13815768#comment-13815768
 ] 

stack commented on HBASE-9775:
--

Looking at YCSB, it makes a static HBaseConfiguration in the head of 
HBaseClient.  Each 'client' then does new HTable(conf, name).  Internally this 
will do a getConnection so all threads/clients will use the same underlying rpc 
connection/HConnection instance.  Let me try amending the YCSB HBaseClient so 
each has its own connection to see if that ups the amount a single 
client/YCSB-thread can drive.

Each HTable has an AsyncProcess.  An AsyncProcess can have 100 tasks 
outstanding at a time; an arbitrary number.  There is an upper bound of 256 
threads per connection IFF the connection was let create its own pool. In the 
YCSB hbase client, the HTable creates the pool, and it is an unbounded pool 
constrained only by AsyncProcess's 100 outstanding tasks, so I'd guess that 
when Elliott was running, he had ...32*77 threads (~2400) rather than an upper 
bound of 256.  To be checked.
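The thread-count concern above can be reproduced with the JDK alone: a cached (unbounded) pool grows one thread per in-flight task, so the only effective bound is the number of outstanding tasks. This is a generic sketch, not the HTable/AsyncProcess code; the names and the task count are illustrative.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

// Generic sketch (not HBase code): with an unbounded cached pool, thread count
// is limited only by how many tasks are in flight at once, mirroring the
// situation described above for HTable's pool plus AsyncProcess's task cap.
public class PoolBoundDemo {
    static int peakThreads(int concurrentTasks) {
        ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newCachedThreadPool();
        CountDownLatch started = new CountDownLatch(concurrentTasks);
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < concurrentTasks; i++) {
            pool.execute(() -> {                  // every task blocks, so each needs a thread
                started.countDown();
                try { release.await(); } catch (InterruptedException ignored) { }
            });
        }
        try {
            started.await();                      // all tasks are now running concurrently
        } catch (InterruptedException e) {
            return -1;
        }
        int size = pool.getPoolSize();            // one thread per in-flight task
        release.countDown();
        pool.shutdown();
        return size;
    }
}
```

A bounded pool (e.g. `Executors.newFixedThreadPool(256)`) would cap this regardless of how many tasks are queued, which is the 256-thread upper bound mentioned above.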


 Client write path perf issues
 -

 Key: HBASE-9775
 URL: https://issues.apache.org/jira/browse/HBASE-9775
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.96.0
Reporter: Elliott Clark
Priority: Critical
 Attachments: 9775.rig.txt, 9775.rig.v2.patch, 9775.rig.v3.patch, 
 Charts Search   Cloudera Manager - ITBLL.png, Charts Search   Cloudera 
 Manager.png, hbase-9775.patch, job_run.log, short_ycsb.png, ycsb.png, 
 ycsb_insert_94_vs_96.png


 Testing on larger clusters has not had the desired throughput increases.





[jira] [Commented] (HBASE-8741) Scope sequenceid to the region rather than regionserver (WAS: Mutations on Regions in recovery mode might have same sequenceIDs)

2013-11-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13815771#comment-13815771
 ] 

stack commented on HBASE-8741:
--

I'll commit tomorrow unless someone else wants to review.

 Scope sequenceid to the region rather than regionserver (WAS: Mutations on 
 Regions in recovery mode might have same sequenceIDs)
 

 Key: HBASE-8741
 URL: https://issues.apache.org/jira/browse/HBASE-8741
 Project: HBase
  Issue Type: Bug
  Components: MTTR
Affects Versions: 0.95.1
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
 Fix For: 0.98.0

 Attachments: HBASE-8741-trunk-v6.1-rebased.patch, 
 HBASE-8741-trunk-v6.2.1.patch, HBASE-8741-trunk-v6.2.2.patch, 
 HBASE-8741-trunk-v6.2.2.patch, HBASE-8741-trunk-v6.3.patch, 
 HBASE-8741-trunk-v6.4.patch, HBASE-8741-trunk-v6.patch, HBASE-8741-v0.patch, 
 HBASE-8741-v2.patch, HBASE-8741-v3.patch, HBASE-8741-v4-again.patch, 
 HBASE-8741-v4-again.patch, HBASE-8741-v4.patch, HBASE-8741-v5-again.patch, 
 HBASE-8741-v5.patch


 Currently, when opening a region, we find the maximum sequence ID from all 
 its HFiles and then set the LogSequenceId of the log (in case the latter is at 
 a small value). This works well in the recovered.edits case, as we are not 
 writing to the region until we have replayed all of its previous edits. 
 With distributed log replay, if we want to enable writes while a region is 
 under recovery, we need to make sure that the logSequenceId > maximum 
 logSequenceId of the old regionserver. Otherwise, we might have a situation 
 where new edits have the same (or smaller) sequenceIds. 
 If we store region-level information in the WALTrailer, then this scenario 
 could be avoided by:
 a) reading the trailer of the last completed file, i.e., the last wal file 
 which has a trailer, and
 b) completely reading the last wal file (this file would not have the 
 trailer, so it needs to be read completely).
 In future, if we switch to multiple wal files, we could read the trailer for 
 all completed WAL files and read the remaining incomplete files completely.





[jira] [Commented] (HBASE-9775) Client write path perf issues

2013-11-07 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13815782#comment-13815782
 ] 

Nicolas Liochon commented on HBASE-9775:


bq.  To be checked.
I confirm. That's by design (in YCSB, and in a way, in HBase as well, 0.94 
included...). It does not really match a real use case: an application server 
would have a single connection.

32 clients on a single machine takes some CPU (whatever the number of 
threads); I guess with all the changes we're likely better, but we could be 
CPU bound.

A test with a single YCSB client on each machine would be interesting imho.

bq. Ideally we want something like what you had before: 5, or 1/2 the CPUs on 
the local server as a guesstimate of how many CPUs the server has, whichever 
is greater
IIRC, I forgot to change hbase-site.xml, so the default has not changed in the 
end :-).
But it's a per-AsyncProcess limit. With the 32 clients, a server can have 64 
writes/queries in parallel. That's already a lot. (With 0.94, it would have 
been 32 max, and likely half or less of that, as we wait for all the queries 
to finish before sending a new set of queries.) 

 Client write path perf issues
 -

 Key: HBASE-9775
 URL: https://issues.apache.org/jira/browse/HBASE-9775
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.96.0
Reporter: Elliott Clark
Priority: Critical
 Attachments: 9775.rig.txt, 9775.rig.v2.patch, 9775.rig.v3.patch, 
 Charts Search   Cloudera Manager - ITBLL.png, Charts Search   Cloudera 
 Manager.png, hbase-9775.patch, job_run.log, short_ycsb.png, ycsb.png, 
 ycsb_insert_94_vs_96.png


 Testing on larger clusters has not had the desired throughput increases.



--


[jira] [Commented] (HBASE-9905) Enable using seqId as timestamp

2013-11-07 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13815783#comment-13815783
 ] 

Nicolas Liochon commented on HBASE-9905:


When doing queries with timestamps, you may/will want the timestamps to be 
'consistent' between the different tables across the machines.

 Enable using seqId as timestamp 
 

 Key: HBASE-9905
 URL: https://issues.apache.org/jira/browse/HBASE-9905
 Project: HBase
  Issue Type: New Feature
Reporter: Enis Soztutar
 Fix For: 0.98.0


 This has been discussed previously, and Lars H. was mentioning an idea from 
 the client to declare whether timestamps are used or not explicitly. 
 The problem is that, for data models not using timestamps, we are still 
 relying on clocks to order the updates. Clock skew, same-millisecond puts 
 after deletes, etc. can cause unexpected behavior and data not being visible. 
 We should have a table descriptor / family property which would declare that 
 the data model does not use timestamps. Then we can populate this dimension 
 with the seqId, so that the global ordering of edits is not affected by wall 
 clock. 
 For example, META will use this. 
 Once we have something like this, we can think of making it the default for 
 new tables, so that the unknowing user will not shoot herself in the foot. 



--


[jira] [Commented] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time

2013-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13815787#comment-13815787
 ] 

Hadoop QA commented on HBASE-9902:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612543/HBASE-9902.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.master.TestClockSkewDetection

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7776//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7776//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7776//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7776//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7776//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7776//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7776//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7776//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7776//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7776//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7776//console

This message is automatically generated.

 Region Server is starting normally even if clock skew is more than default 30 
 seconds(or any configured). - Regionserver node time is greater than master 
 node time
 

 Key: HBASE-9902
 URL: https://issues.apache.org/jira/browse/HBASE-9902
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.11
Reporter: Kashif J S
 Fix For: 0.98.0, 0.96.1

 Attachments: HBASE-9902.patch


 When Region server's time is ahead of Master's time and the difference is 
 more than hbase.master.maxclockskew value, region server startup is not 
 failing with ClockOutOfSyncException.
 This causes some abnormal behavior as detected by our Tests.
 ServerManager.java#checkClockSkew
   long skew = System.currentTimeMillis() - serverCurrentTime;
 if (skew > maxSkew) {
   String message = "Server " + serverName + " has been " +
 "rejected; Reported time is too far out of sync with master. " +
 "Time difference of " + skew + "ms > max allowed of " + maxSkew + "ms";
   LOG.warn(message);
   throw new ClockOutOfSyncException(message);
 }
 The line above yields a negative value when the master's time is less than 
 the region server's time, and the if (skew > maxSkew) check then fails to 
 detect the skew in this case.
 Please note: this was tested on HBase 0.94.11, and trunk currently has the 
 same logic.
 The fix would be to make the skew a positive value first, as below:
  long skew = System.currentTimeMillis() - serverCurrentTime;
 skew = (skew < 0 ? -skew : skew);
 if (skew 

[jira] [Updated] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time

2013-11-07 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9902:
---

Assignee: Kashif J S

 Region Server is starting normally even if clock skew is more than default 30 
 seconds(or any configured). - Regionserver node time is greater than master 
 node time
 

 Key: HBASE-9902
 URL: https://issues.apache.org/jira/browse/HBASE-9902
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.11
Reporter: Kashif J S
Assignee: Kashif J S
 Fix For: 0.98.0, 0.96.1

 Attachments: HBASE-9902.patch


 When Region server's time is ahead of Master's time and the difference is 
 more than hbase.master.maxclockskew value, region server startup is not 
 failing with ClockOutOfSyncException.
 This causes some abnormal behavior as detected by our Tests.
 ServerManager.java#checkClockSkew
   long skew = System.currentTimeMillis() - serverCurrentTime;
 if (skew > maxSkew) {
   String message = "Server " + serverName + " has been " +
 "rejected; Reported time is too far out of sync with master. " +
 "Time difference of " + skew + "ms > max allowed of " + maxSkew +
 "ms";
   LOG.warn(message);
   throw new ClockOutOfSyncException(message);
 }
 The line above results in a negative value when the Master's time is less
 than the region server's time, and the "if (skew > maxSkew)" check fails to
 detect the skew in this case.
 Please note: this was tested on HBase version 0.94.11, and trunk currently
 has the same logic.
 The fix would be to make the skew a positive value first, as below:
  long skew = System.currentTimeMillis() - serverCurrentTime;
 skew = (skew < 0 ? -skew : skew);
 if (skew > maxSkew) {
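A standalone sketch of the proposed check (hypothetical names; the real code lives in ServerManager#checkClockSkew) shows why taking the absolute value catches skew in both directions:

```java
public class ClockSkewCheck {
    // Illustrative stand-in for ServerManager#checkClockSkew: returns true
    // when the reported server time is too far from the master's time in
    // either direction.
    static boolean isOutOfSync(long masterTime, long serverCurrentTime, long maxSkew) {
        long skew = masterTime - serverCurrentTime;
        skew = (skew < 0 ? -skew : skew); // make the skew positive first
        return skew > maxSkew;
    }
}
```

With maxSkew = 30000 (the hbase.master.maxclockskew default), a region server 40 seconds ahead of the master is now rejected just like one 40 seconds behind.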



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9001) TestThriftServerCmdLine.testRunThriftServer[0] failed

2013-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13815809#comment-13815809
 ] 

Hudson commented on HBASE-9001:
---

SUCCESS: Integrated in hbase-0.96-hadoop2 #114 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/114/])
HBASE-9001 Add a toString in HTable, fix a log in AssignmentManager (nkeywal: 
rev 1539426)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java


 TestThriftServerCmdLine.testRunThriftServer[0] failed
 -

 Key: HBASE-9001
 URL: https://issues.apache.org/jira/browse/HBASE-9001
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: stack
Assignee: stack
 Fix For: 0.95.2

 Attachments: 9001.txt


 https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/624/testReport/junit/org.apache.hadoop.hbase.thrift/TestThriftServerCmdLine/testRunThriftServer_0_/
 It seems stuck here:
 {code}
 2013-07-19 03:52:03,158 INFO  [Thread-131] 
 thrift.TestThriftServerCmdLine(132): Starting HBase Thrift server with 
 command line: -hsha -port 56708 start
 2013-07-19 03:52:03,174 INFO  [ThriftServer-cmdline] 
 thrift.ThriftServerRunner$ImplType(208): Using thrift server type hsha
 2013-07-19 03:52:03,205 WARN  [ThriftServer-cmdline] conf.Configuration(817): 
 fs.default.name is deprecated. Instead, use fs.defaultFS
 2013-07-19 03:52:03,206 WARN  [ThriftServer-cmdline] conf.Configuration(817): 
 mapreduce.job.counters.limit is deprecated. Instead, use 
 mapreduce.job.counters.max
 2013-07-19 03:52:03,207 WARN  [ThriftServer-cmdline] conf.Configuration(817): 
 io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
 2013-07-19 03:54:03,156 INFO  [pool-1-thread-1] hbase.ResourceChecker(171): 
 after: thrift.TestThriftServerCmdLine#testRunThriftServer[0] Thread=146 (was 
 155), OpenFileDescriptor=295 (was 311), MaxFileDescriptor=4096 (was 4096), 
 SystemLoadAverage=293 (was 240) - SystemLoadAverage LEAK? -, ProcessCount=145 
 (was 143) - ProcessCount LEAK? -, AvailableMemoryMB=779 (was 1263), 
 ConnectionCount=4 (was 4)
 2013-07-19 03:54:03,157 DEBUG [pool-1-thread-1] 
 thrift.TestThriftServerCmdLine(107): implType=-hsha, specifyFramed=false, 
 specifyBindIP=false, specifyCompact=true
 {code}
 My guess is that we didn't get scheduled because load was almost 300 on this 
 box at the time?
 Let me up the timeout of two minutes.





[jira] [Commented] (HBASE-9885) Avoid some Result creation in protobuf conversions

2013-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13815808#comment-13815808
 ] 

Hudson commented on HBASE-9885:
---

SUCCESS: Integrated in hbase-0.96-hadoop2 #114 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/114/])
HBASE-9885 Avoid some Result creation in protobuf conversions - REVERT to check 
the cause of precommit flakiness (nkeywal: rev 1539493)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
HBASE-9885 Avoid some Result creation in protobuf conversions (nkeywal: rev 
1539427)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java


 Avoid some Result creation in protobuf conversions
 --

 Key: HBASE-9885
 URL: https://issues.apache.org/jira/browse/HBASE-9885
 Project: HBase
  Issue Type: Bug
  Components: Client, Protobufs, regionserver
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1

 Attachments: 9885.v1.patch, 9885.v2, 9885.v2.patch, 9885.v3.patch, 
 9885.v3.patch


 We create a lot of Results that we could avoid, as they contain nothing
 other than a boolean value. We sometimes also create a protobuf builder on
 this path; this can be avoided.





[jira] [Updated] (HBASE-9885) Avoid some Result creation in protobuf conversions

2013-11-07 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9885:
---

Attachment: 9885.v4.patch

 Avoid some Result creation in protobuf conversions
 --

 Key: HBASE-9885
 URL: https://issues.apache.org/jira/browse/HBASE-9885
 Project: HBase
  Issue Type: Bug
  Components: Client, Protobufs, regionserver
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1

 Attachments: 9885.v1.patch, 9885.v2, 9885.v2.patch, 9885.v3.patch, 
 9885.v3.patch, 9885.v4.patch


 We create a lot of Results that we could avoid, as they contain nothing
 other than a boolean value. We sometimes also create a protobuf builder on
 this path; this can be avoided.





[jira] [Updated] (HBASE-9885) Avoid some Result creation in protobuf conversions

2013-11-07 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9885:
---

Status: Patch Available  (was: Reopened)

 Avoid some Result creation in protobuf conversions
 --

 Key: HBASE-9885
 URL: https://issues.apache.org/jira/browse/HBASE-9885
 Project: HBase
  Issue Type: Bug
  Components: Client, Protobufs, regionserver
Affects Versions: 0.96.0, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1

 Attachments: 9885.v1.patch, 9885.v2, 9885.v2.patch, 9885.v3.patch, 
 9885.v3.patch, 9885.v4.patch


 We create a lot of Results that we could avoid, as they contain nothing
 other than a boolean value. We sometimes also create a protobuf builder on
 this path; this can be avoided.





[jira] [Updated] (HBASE-9903) Remove the jamon generated classes from the findbugs analysis

2013-11-07 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9903:
---

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

 Remove the jamon generated classes from the findbugs analysis
 -

 Key: HBASE-9903
 URL: https://issues.apache.org/jira/browse/HBASE-9903
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0

 Attachments: 9903.v1.patch, 9903.v2.patch, 9903.v2.patch


 The current filter does not work.





[jira] [Commented] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointerException

2013-11-07 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13815866#comment-13815866
 ] 

Jean-Marc Spaggiari commented on HBASE-9913:


Hi,

I'm not really sure to get what the issue is. Can you please provide more 
details, like the version you are using for HBase and Hadoop, what you are 
trying to do, etc.? And for the file provided, can you please provide a diff 
instead?

You can see how to do that here: 
http://hbase.apache.org/book/submitting.patches.html

Thanks.

 weblogic deployment project implementation under the mapreduce hbase reported 
 a NullPointerException
 

 Key: HBASE-9913
 URL: https://issues.apache.org/jira/browse/HBASE-9913
 Project: HBase
  Issue Type: Bug
  Components: hadoop2
Affects Versions: 0.94.10
 Environment: weblogic windows
Reporter: 刘泓
  Labels: patch
 Fix For: 0.94.10

 Attachments: TableMapReduceUtil.class, TableMapReduceUtil.java


 java.lang.NullPointerException
   at java.io.File.&lt;init&gt;(File.java:222)
   at java.util.zip.ZipFile.&lt;init&gt;(ZipFile.java:75)
   at 
 org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.updateMap(TableMapReduceUtil.java:617)
   at 
 org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.findOrCreateJar(TableMapReduceUtil.java:597)
   at 
 org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:557)
   at 
 org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:518)
   at 
 org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:144)
   at 
 org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:221)
   at 
 org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:87)
   at 
 com.easymap.ezserver6.map.source.hbase.convert.HBaseMapMerge.beginMerge(HBaseMapMerge.java:163)
   at 
 com.easymap.ezserver6.app.servlet.EzMapToHbaseService.doPost(EzMapToHbaseService.java:32)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
   at 
 weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
   at 
 weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
   at 
 weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
   at 
 weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
   at 
 weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
   at 
 weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
   at 
 weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
   at 
 weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
   at 
 weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
   at 
 weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
   at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
   at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
  
 By inspecting the HBase source code under WebLogic, we found that the
 string returned by TableMapReduceUtil.findOrCreateJar is null. Under Tomcat,
 a jar file's URL protocol is "jar", but under WebLogic it is "zip", and the
 findOrCreateJar method can't resolve the zip type, so we should add a check
 for the zip protocol.
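A minimal sketch of the idea (hypothetical names, not the actual TableMapReduceUtil code): treat a zip: URL the same way as a jar: URL when extracting the archive path from a class-loader resource URL:

```java
public class JarPathResolver {
    // Given the protocol and path of a resource URL such as
    // jar:file:/path/app.jar!/com/Foo.class (Tomcat) or
    // zip:/path/app.jar!/com/Foo.class (WebLogic), return the archive path,
    // or null for a protocol we don't understand.
    static String archivePath(String protocol, String path) {
        if ("jar".equals(protocol) || "zip".equals(protocol)) {
            int bang = path.indexOf("!/");
            return bang >= 0 ? path.substring(0, bang) : path;
        }
        // Before the proposed fix, the zip case would fall through to here,
        // and the caller would end up with a null jar path.
        return null;
    }
}
```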





[jira] [Commented] (HBASE-9775) Client write path perf issues

2013-11-07 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13815864#comment-13815864
 ] 

Jean-Marc Spaggiari commented on HBASE-9775:


Anyway I'm not able to apply this patch on 0.96:
{code}
jmspaggi@hbasetest1:~/hbase/hbase-0.96-9775$ patch -p0 &lt; 9775.rig.v3.patch 
patching file 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
patching file 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
Reversed (or previously applied) patch detected!  Assume -R? [n] 
Apply anyway? [n] y
Hunk #1 FAILED at 60.
Hunk #2 succeeded at 288 (offset 1 line).
Hunk #3 FAILED at 1025.
2 out of 3 hunks FAILED -- saving rejects to file 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java.rej
patching file 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiServerCallable.java
patching file 
hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
patching file 
hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
patching file 
hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientNoCluster.java
patching file 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
Reversed (or previously applied) patch detected!  Assume -R? [n] 
Apply anyway? [n] y
Hunk #1 FAILED at 1632.
1 out of 1 hunk FAILED -- saving rejects to file 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java.rej
patching file 
hbase-server/src/main/java/org/apache/hadoop/hbase/tool/Canary.java
Reversed (or previously applied) patch detected!  Assume -R? [n] 
Apply anyway? [n] y
Hunk #1 FAILED at 34.
Hunk #2 FAILED at 616.
2 out of 2 hunks FAILED -- saving rejects to file 
hbase-server/src/main/java/org/apache/hadoop/hbase/tool/Canary.java.rej
{code}

I will run on 0.94 instead of 0.96+patch, but I don't think we can really 
compare the 2. The default parameters are very different, so just because of 
the config the results will be different too.

 Client write path perf issues
 -

 Key: HBASE-9775
 URL: https://issues.apache.org/jira/browse/HBASE-9775
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.96.0
Reporter: Elliott Clark
Priority: Critical
 Attachments: 9775.rig.txt, 9775.rig.v2.patch, 9775.rig.v3.patch, 
 Charts Search   Cloudera Manager - ITBLL.png, Charts Search   Cloudera 
 Manager.png, hbase-9775.patch, job_run.log, short_ycsb.png, ycsb.png, 
 ycsb_insert_94_vs_96.png


 Testing on larger clusters has not had the desired throughput increases.





[jira] [Commented] (HBASE-7403) Online Merge

2013-11-07 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13815867#comment-13815867
 ] 

Jean-Marc Spaggiari commented on HBASE-7403:


Just to provide more details ;)

If you look at the Fix version/s field, you will see 0.95.0. 0.95.0 is the 
dev version of 0.96.0. Everything that was fixed in 0.95.0 is in 0.96.0.

 Online Merge
 

 Key: HBASE-7403
 URL: https://issues.apache.org/jira/browse/HBASE-7403
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.95.0
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.98.0, 0.95.0

 Attachments: 7403-trunkv5.patch, 7403-trunkv6.patch, 7403-v5.txt, 
 7403v5.diff, 7403v5.txt, hbase-7403-0.95.patch, hbase-7403-94v1.patch, 
 hbase-7403-trunkv1.patch, hbase-7403-trunkv10.patch, 
 hbase-7403-trunkv11.patch, hbase-7403-trunkv12.patch, 
 hbase-7403-trunkv13.patch, hbase-7403-trunkv14.patch, 
 hbase-7403-trunkv15.patch, hbase-7403-trunkv16.patch, 
 hbase-7403-trunkv19.patch, hbase-7403-trunkv20.patch, 
 hbase-7403-trunkv22.patch, hbase-7403-trunkv23.patch, 
 hbase-7403-trunkv24.patch, hbase-7403-trunkv26.patch, 
 hbase-7403-trunkv28.patch, hbase-7403-trunkv29.patch, 
 hbase-7403-trunkv30.patch, hbase-7403-trunkv31.patch, 
 hbase-7403-trunkv32.patch, hbase-7403-trunkv33.patch, 
 hbase-7403-trunkv5.patch, hbase-7403-trunkv6.patch, hbase-7403-trunkv7.patch, 
 hbase-7403-trunkv8.patch, hbase-7403-trunkv9.patch, merge region.pdf


 Support executing a region merge transaction on the Regionserver, similar to
 the split transaction.
 Process of merging two regions:
 a. client sends RPC (dispatch merging regions) to master
 b. master moves the regions together (onto the same regionserver where the
 more heavily loaded region resided)
 c. master sends RPC (merge regions) to this regionserver
 d. Regionserver executes the region merge transaction in the thread pool
 e. steps b, c, d above run asynchronously
 Process of the region merge transaction:
 a. Construct a new region merge transaction.
 b. Prepare for the merge transaction; the transaction will be canceled if it
 is not feasible,
 e.g. the two regions don't belong to the same table; the two regions are not
 adjacent in a non-compulsory merge; a region is closed or has references
 c. Execute the transaction as follows:
 /**
  * Set region as in transition, set it into MERGING state.
  */
 SET_MERGING_IN_ZK,
 /**
  * We created the temporary merge data directory.
  */
 CREATED_MERGE_DIR,
 /**
  * Closed the merging region A.
  */
 CLOSED_REGION_A,
 /**
  * The merging region A has been taken out of the server's online regions 
 list.
  */
 OFFLINED_REGION_A,
 /**
  * Closed the merging region B.
  */
 CLOSED_REGION_B,
 /**
  * The merging region B has been taken out of the server's online regions 
 list.
  */
 OFFLINED_REGION_B,
 /**
  * Started in on creation of the merged region.
  */
 STARTED_MERGED_REGION_CREATION,
 /**
  * Point of no return. If we got here, then transaction is not recoverable
  * other than by crashing out the regionserver.
  */
 PONR
 d. Roll back if step c throws an exception
 Usage:
 HBaseAdmin#mergeRegions
 See the patch for more details
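The journal-plus-rollback pattern the phases describe can be sketched as follows (illustrative only; the class name and rollback bodies are placeholders, not the patch's code):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class MergeJournalSketch {
    // Mirrors the phase list above.
    enum Phase { SET_MERGING_IN_ZK, CREATED_MERGE_DIR, CLOSED_REGION_A,
        OFFLINED_REGION_A, CLOSED_REGION_B, OFFLINED_REGION_B,
        STARTED_MERGED_REGION_CREATION, PONR }

    // Completed phases, most recent first.
    private final Deque<Phase> journal = new ArrayDeque<Phase>();

    void record(Phase p) { journal.push(p); }

    // Undo completed phases in reverse order; returns false once the point
    // of no return was recorded, since only aborting the server can recover.
    boolean rollback() {
        while (!journal.isEmpty()) {
            Phase p = journal.pop();
            if (p == Phase.PONR) return false;
            // undo(p) would go here: reopen closed regions, delete merge dir...
        }
        return true;
    }
}
```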





[jira] [Commented] (HBASE-9775) Client write path perf issues

2013-11-07 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13815875#comment-13815875
 ] 

Nicolas Liochon commented on HBASE-9775:


The patch is not important; this jira is a kind of umbrella. The real meat has 
been done in other patches, already committed (or not committable, which leads 
to the same point: there is no patch to apply :-) ).
What we're testing here is the baseline, like the graphs you were doing a while 
ago: how 0.96 behaves compared to 0.94 with the default settings in both cases. 

Thanks, Jean-Marc.




 Client write path perf issues
 -

 Key: HBASE-9775
 URL: https://issues.apache.org/jira/browse/HBASE-9775
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.96.0
Reporter: Elliott Clark
Priority: Critical
 Attachments: 9775.rig.txt, 9775.rig.v2.patch, 9775.rig.v3.patch, 
 Charts Search   Cloudera Manager - ITBLL.png, Charts Search   Cloudera 
 Manager.png, hbase-9775.patch, job_run.log, short_ycsb.png, ycsb.png, 
 ycsb_insert_94_vs_96.png


 Testing on larger clusters has not had the desired throughput increases.





[jira] [Commented] (HBASE-9775) Client write path perf issues

2013-11-07 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13815915#comment-13815915
 ] 

Jean-Marc Spaggiari commented on HBASE-9775:


Then it's perfect. 0.96 has been running since yesterday, and I have 0.94 ready 
in the pipe. It seems to take more than 12h per run, so results might only come 
tomorrow.

Also, it's running in standalone mode, but I have 6 drives in this server (1 SSD 
+ 5 SATA), so I can run it again in pseudo-distributed mode over the week-end if 
we want...

 Client write path perf issues
 -

 Key: HBASE-9775
 URL: https://issues.apache.org/jira/browse/HBASE-9775
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.96.0
Reporter: Elliott Clark
Priority: Critical
 Attachments: 9775.rig.txt, 9775.rig.v2.patch, 9775.rig.v3.patch, 
 Charts Search   Cloudera Manager - ITBLL.png, Charts Search   Cloudera 
 Manager.png, hbase-9775.patch, job_run.log, short_ycsb.png, ycsb.png, 
 ycsb_insert_94_vs_96.png


 Testing on larger clusters has not had the desired throughput increases.





[jira] [Commented] (HBASE-9879) Can't undelete a KeyValue

2013-11-07 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13815945#comment-13815945
 ] 

Jean-Marc Spaggiari commented on HBASE-9879:


Sorry, I was busy the last few days and did not get a chance to update here.

So here is the concern about this.

Let's imagine you have a row with an ID, describing a unique identifier for 
your use case. Then you have one single CF, and columns. Columns are event 
categories, so each category is a type of event for this ID. Now, for my 
events I have a payload in the cell. My payload is an Avro object, where I have 
details about this specific occurrence of the event. In this payload I have a 
field: open, or close.

It's stored that way because I want to be able to see the event chronology using 
the versions, and I want to be able to jump to a specific event type.

I store my events, opened. And now I want to close one of them. So I want to 
insert the exact same row with the exact same timestamp, but a slightly 
modified payload where the status is now closed.

I will have 2 timestamps for this newly introduced cell. I will have the 
original timestamp, the timestamp when the cell was initially written into 
HBase; this timestamp will help with the ordering of the cells within a 
specific event type. BUT... I also want to have a timestamp for the update 
itself, and this last one to be used to order updates to the same event.

If those modifications come in fast, they might both be in memory, and I don't 
think we have any guarantee of the order. If there is some time between the 2, 
the first one might already be in one store and the 2nd one in another store, 
so we should be fine. But not always. So we can't really use that.

Same for the delete. If I send a put, then a delete, then a put as detailed by 
Benoit initially, the last put will be hidden by the delete, unless a 
compaction happened just before it and the delete and the first put got 
flushed. 

Now, if we use a cell tag sequence ID, or a cell tag operation timestamp... If 
you send a put, then a delete, then a put, all with the same cell timestamp, 
the operation timestamps will not be the same, so we will be able to order 
those 3 operations based on the operation timestamp; then we will be able to 
clear the first put and the delete but keep the last put. Same for the case 
where we have a put then a put, again with the same cell timestamp: we will be 
able to order those 2 operations based on the operation timestamp.
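The ordering described above can be sketched with a comparator over a hypothetical (cell timestamp, operation sequence) pair; none of these names come from HBase, it's just to illustrate how three operations sharing one cell timestamp become totally ordered:

```java
import java.util.Arrays;
import java.util.Comparator;

public class OpOrderSketch {
    static class Cell {
        final long timestamp; // cell timestamp chosen by the client
        final long opSeq;     // hypothetical per-operation sequence tag
        final String op;      // "put" or "delete", for illustration
        Cell(long timestamp, long opSeq, String op) {
            this.timestamp = timestamp; this.opSeq = opSeq; this.op = op;
        }
    }

    // Newest cell timestamp first; for equal timestamps, newest operation first.
    static final Comparator<Cell> ORDER = Comparator
        .comparingLong((Cell c) -> c.timestamp).reversed()
        .thenComparing(Comparator.comparingLong((Cell c) -> c.opSeq).reversed());

    static Cell[] sorted(Cell... cells) {
        Cell[] copy = cells.clone();
        Arrays.sort(copy, ORDER);
        return copy;
    }
}
```

With this ordering, the second put (highest opSeq) sorts ahead of the delete, so it survives instead of being shadowed.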

We can make this optional for a table, with a parameter like 
CONSIDER_OPERATION_TIMESTAMP, which means cells are also ordered by this extra 
dimension and the information is stored in a tag. Like the option you want to 
introduce to block the feature of specifying an explicit cell timestamp, this 
would be another option to improve its usage depending on the use case.

My 2¢ ;)

 Can't undelete a KeyValue
 -

 Key: HBASE-9879
 URL: https://issues.apache.org/jira/browse/HBASE-9879
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: Benoit Sigoure

 Test scenario:
 put(KV, timestamp=100)
 put(KV, timestamp=200)
 delete(KV, timestamp=200, with MutationProto.DeleteType.DELETE_ONE_VERSION)
 get(KV) => returns value at timestamp=100 (OK)
 put(KV, timestamp=200)
 get(KV) => returns value at timestamp=100 (but not the one at timestamp=200 
 that was reborn by the previous put)
 Is that normal?
 I ran into this bug while running the integration tests at 
 https://github.com/OpenTSDB/asynchbase/pull/60 – the first time you run it, 
 it passes, but after that, it keeps failing.  Sorry I don't have the 
 corresponding HTable-based code but that should be fairly easy to write.
 I only tested this with 0.96.0, dunno yet how this behaved in prior releases.
 My hunch is that the tombstone added by the DELETE_ONE_VERSION keeps 
 shadowing the value even after it's reborn.





[jira] [Commented] (HBASE-9905) Enable using seqId as timestamp

2013-11-07 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13815951#comment-13815951
 ] 

Jean-Marc Spaggiari commented on HBASE-9905:


Is that almost the same thing as what we are discussing here: HBASE-9879 ?

Also, I disagree with deprecating the current mixed mode. Someone might want to 
let HBase decide on the timestamp but might want to be able to take action on a 
specific cell in the table based on its timestamp for later updates.

 Enable using seqId as timestamp 
 

 Key: HBASE-9905
 URL: https://issues.apache.org/jira/browse/HBASE-9905
 Project: HBase
  Issue Type: New Feature
Reporter: Enis Soztutar
 Fix For: 0.98.0


 This has been discussed previously, and Lars H. was mentioning an idea from 
 the client to declare whether timestamps are used or not explicitly. 
 The problem is that, for data models not using timestamps, we are still 
 relying on clocks to order the updates. Clock skew, same-millisecond puts 
 after deletes, etc. can cause unexpected behavior and data not being visible.  
 We should have a table descriptor / family property which would declare that 
 the data model does not use timestamps. Then we can populate this dimension 
 with the seqId, so that the global ordering of edits is not affected by wall 
 clock. 
 For example, META will use this. 
 Once we have something like this, we can think of making it default for new 
 tables, so that the unknowing user will not shoot herself in the foot. 
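In a minimal sketch (an AtomicLong standing in for the region's WAL sequence id; not actual HBase code), filling the timestamp dimension from a monotonic counter makes edit ordering independent of the wall clock:

```java
import java.util.concurrent.atomic.AtomicLong;

public class SeqIdTimestamps {
    // Stand-in for the per-region WAL sequence id.
    private final AtomicLong seqId = new AtomicLong();

    // Each edit gets a strictly increasing "timestamp", so two edits issued
    // in the same millisecond (or across skewed clocks) still order correctly.
    long nextEditTimestamp() { return seqId.incrementAndGet(); }
}
```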





[jira] [Commented] (HBASE-9885) Avoid some Result creation in protobuf conversions

2013-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13815969#comment-13815969
 ] 

Hadoop QA commented on HBASE-9885:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612569/9885.v4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build///testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build///artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build///artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build///artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build///artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build///artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build///artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build///artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build///artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build///artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build///console

This message is automatically generated.

 Avoid some Result creation in protobuf conversions
 --

 Key: HBASE-9885
 URL: https://issues.apache.org/jira/browse/HBASE-9885
 Project: HBase
  Issue Type: Bug
  Components: Client, Protobufs, regionserver
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1

 Attachments: 9885.v1.patch, 9885.v2, 9885.v2.patch, 9885.v3.patch, 
 9885.v3.patch, 9885.v4.patch


 We create a lot of Results that we could avoid, as they contain nothing
 other than a boolean value. We sometimes also create a protobuf builder on
 this path; this can be avoided.





[jira] [Created] (HBASE-9914) Port fix for HBASE-9836 'Intermittent TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking failure' to 0.94

2013-11-07 Thread Ted Yu (JIRA)
Ted Yu created HBASE-9914:
-

 Summary: Port fix for HBASE-9836 'Intermittent 
TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking 
failure' to 0.94
 Key: HBASE-9914
 URL: https://issues.apache.org/jira/browse/HBASE-9914
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu


According to this thread: http://search-hadoop.com/m/3CzC31BQsDd , 
TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking 
sometimes failed.

This issue is to port the fix from HBASE-9836 to 0.94





[jira] [Commented] (HBASE-9775) Client write path perf issues

2013-11-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816067#comment-13816067
 ] 

stack commented on HBASE-9775:
--

bq. I confirm. That's by design (in YCSB, and in a way, in HBase as well, 0.94 
included...). It does not really match a real use case, an application server 
would have a single connection.

It is a little odd.  We are 'constraining' all to go via the one connection.  
Even if you up the threads because you want more throughput, we will constrain 
you to run over the one connection.  Let me try and see if connection per 
thread makes a difference.

bq. IIRC, I forgot to change hbase-site.xml, so the default has not changed in 
the end.

You fixing boss?  Apparently [~eclark] made the change in this test anyways?



 Client write path perf issues
 -

 Key: HBASE-9775
 URL: https://issues.apache.org/jira/browse/HBASE-9775
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.96.0
Reporter: Elliott Clark
Priority: Critical
 Attachments: 9775.rig.txt, 9775.rig.v2.patch, 9775.rig.v3.patch, 
 Charts Search   Cloudera Manager - ITBLL.png, Charts Search   Cloudera 
 Manager.png, hbase-9775.patch, job_run.log, short_ycsb.png, ycsb.png, 
 ycsb_insert_94_vs_96.png


 Testing on larger clusters has not had the desired throughput increases.





[jira] [Commented] (HBASE-9885) Avoid some Result creation in protobuf conversions

2013-11-07 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816073#comment-13816073
 ] 

Nicolas Liochon commented on HBASE-9885:


Committed v4; the issue in the build became visible only in other builds after 
commit... I will resolve the jira if it seems fine after a few trials.

 Avoid some Result creation in protobuf conversions
 --

 Key: HBASE-9885
 URL: https://issues.apache.org/jira/browse/HBASE-9885
 Project: HBase
  Issue Type: Bug
  Components: Client, Protobufs, regionserver
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1

 Attachments: 9885.v1.patch, 9885.v2, 9885.v2.patch, 9885.v3.patch, 
 9885.v3.patch, 9885.v4.patch


 We create a lot of Result objects that we could avoid, as they contain nothing 
 other than a boolean value. We also sometimes create a protobuf builder on 
 this path; this can be avoided.
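A hedged sketch of the kind of saving described: when a call only needs to convey a boolean, hand back a shared immutable instance instead of building a fresh object (and a fresh protobuf builder) per call. `BoolResult` is an illustrative class, not the actual HBase `Result`.

```java
public class BoolResult {
    private final boolean exists;
    private BoolResult(boolean exists) { this.exists = exists; }

    // Shared immutable instances: no per-call allocation.
    public static final BoolResult TRUE = new BoolResult(true);
    public static final BoolResult FALSE = new BoolResult(false);

    public static BoolResult of(boolean b) { return b ? TRUE : FALSE; }
    public boolean exists() { return exists; }

    public static void main(String[] args) {
        // Same instance every time, unlike constructing a new result per call.
        System.out.println(BoolResult.of(true) == BoolResult.of(true));
    }
}
```

This works only because the instances are immutable; a mutable result could not be shared safely across calls.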





[jira] [Commented] (HBASE-9775) Client write path perf issues

2013-11-07 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816075#comment-13816075
 ] 

Nicolas Liochon commented on HBASE-9775:


bq. You fixing boss? Apparently Elliott Clark made the change in this test 
anyways?
I plan to file a jira once we're clear on the values we want, client pause and 
back-off included...


 Client write path perf issues
 -

 Key: HBASE-9775
 URL: https://issues.apache.org/jira/browse/HBASE-9775
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.96.0
Reporter: Elliott Clark
Priority: Critical
 Attachments: 9775.rig.txt, 9775.rig.v2.patch, 9775.rig.v3.patch, 
 Charts Search   Cloudera Manager - ITBLL.png, Charts Search   Cloudera 
 Manager.png, hbase-9775.patch, job_run.log, short_ycsb.png, ycsb.png, 
 ycsb_insert_94_vs_96.png


 Testing on larger clusters has not had the desired throughput increases.





[jira] [Commented] (HBASE-9890) MR jobs are not working if started by a delegated user

2013-11-07 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816120#comment-13816120
 ] 

Francis Liu commented on HBASE-9890:


V2 looks good.  +1 on fixing mapred.*.


 MR jobs are not working if started by a delegated user
 --

 Key: HBASE-9890
 URL: https://issues.apache.org/jira/browse/HBASE-9890
 Project: HBase
  Issue Type: Bug
  Components: mapreduce, security
Affects Versions: 0.98.0, 0.94.12, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.98.0, 0.94.13, 0.96.1

 Attachments: HBASE-9890-94-v0.patch, HBASE-9890-94-v1.patch, 
 HBASE-9890-v0.patch, HBASE-9890-v1.patch, HBASE-9890-v2.patch


 If Map-Reduce jobs are started by a proxy user that already has the 
 delegation tokens, we get an exception when obtaining a token, since the proxy 
 user doesn't have the Kerberos auth.
 For example:
  * If we use oozie to execute RowCounter - oozie will get the tokens required 
 (HBASE_AUTH_TOKEN) and it will start the RowCounter. Once the RowCounter 
 tries to obtain the token, it will get an exception.
  * If we use oozie to execute LoadIncrementalHFiles - oozie will get the 
 tokens required (HDFS_DELEGATION_TOKEN) and it will start the 
 LoadIncrementalHFiles. Once the LoadIncrementalHFiles tries to obtain the 
 token, it will get an exception.
 {code}
  org.apache.hadoop.hbase.security.AccessDeniedException: Token generation 
 only allowed for Kerberos authenticated clients
 at 
 org.apache.hadoop.hbase.security.token.TokenProvider.getAuthenticationToken(TokenProvider.java:87)
 {code}
 {code}
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Delegation Token 
 can be issued only with kerberos or web authentication
   at 
 org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:783)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:868)
   at 
 org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:509)
   at 
 org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:487)
   at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:130)
   at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:111)
   at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:85)
   at 
 org.apache.hadoop.filecache.TrackerDistributedCacheManager.getDelegationTokens(TrackerDistributedCacheManager.java:949)
   at 
 org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:854)
   at 
 org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:743)
   at 
 org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:945)
   at org.apache.hadoop.mapreduce.Job.submit(Job.java:566)
   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:596)
   at 
 org.apache.hadoop.hbase.mapreduce.RowCounter.main(RowCounter.java:173)
 {code}
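The failure mode above suggests the usual guard: only ask for a new token when the job's credentials do not already carry one. The sketch below shows that control flow against a toy credentials map; the names (`TokenGuard`, `obtainToken`) are illustrative, not the actual HBase/Hadoop security API.

```java
import java.util.HashMap;
import java.util.Map;

public class TokenGuard {
    static final String HBASE_AUTH_TOKEN = "HBASE_AUTH_TOKEN";

    // Toy stand-in for the job's Credentials object.
    static Map<String, String> credentials = new HashMap<>();

    static String obtainToken(Map<String, String> creds) {
        // Skip the token RPC when the proxy user was already handed a token
        // (e.g. by oozie); making the call would fail without Kerberos auth.
        String existing = creds.get(HBASE_AUTH_TOKEN);
        if (existing != null) {
            return existing;
        }
        // Only here would we go to the server for a fresh token.
        return "freshly-obtained-token";
    }

    public static void main(String[] args) {
        credentials.put(HBASE_AUTH_TOKEN, "token-from-oozie");
        System.out.println(obtainToken(credentials));
    }
}
```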





[jira] [Commented] (HBASE-9818) NPE in HFileBlock#AbstractFSReader#readAtOffset

2013-11-07 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816126#comment-13816126
 ] 

Ted Yu commented on HBASE-9818:
---

Unfortunately the change in the trial patch seemed to affect timing.
After 200 iterations, there was still no failure:
{code}
$ grep 'rver.TestHRegion' 9818.out | wc
200 400   11400
{code}

 NPE in HFileBlock#AbstractFSReader#readAtOffset
 ---

 Key: HBASE-9818
 URL: https://issues.apache.org/jira/browse/HBASE-9818
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Ted Yu
 Attachments: 9818-trial.txt, 9818-v2.txt, 9818-v3.txt, 9818-v4.txt, 
 9818-v5.txt


 HFileBlock#istream seems to be null.  I was wondering whether we should hide 
 FSDataInputStreamWrapper#useHBaseChecksum.
 By the way, this happened when online schema change was enabled (encoding).
 {noformat}
 2013-10-22 10:58:43,321 ERROR [RpcServer.handler=28,port=36020] 
 regionserver.HRegionServer:
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1200)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1436)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:359)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:503)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:553)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:245)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:166)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:361)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:336)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:293)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:258)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:603)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:476)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:129)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3546)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3616)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3494)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3485)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3079)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:27022)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:1979)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:90)
 at 
 org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
 at 
 org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
 at 
 org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
 at java.lang.Thread.run(Thread.java:724)
 2013-10-22 10:58:43,665 ERROR [RpcServer.handler=23,port=36020] 
 regionserver.HRegionServer:
 org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
 nextCallSeq: 53438 But the nextCallSeq got from client: 53437; 
 request=scanner_id: 1252577470624375060 number_of_rows: 100 close_scanner: 
 false next_call_seq: 53437
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3030)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:27022)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:1979)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:90)
 at 
 org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
 at 
 org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
 at 
 

[jira] [Updated] (HBASE-9907) Rig to fake a cluster so can profile client behaviors

2013-11-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-9907:
-

Status: Patch Available  (was: Open)

 Rig to fake a cluster so can profile client behaviors
 -

 Key: HBASE-9907
 URL: https://issues.apache.org/jira/browse/HBASE-9907
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Affects Versions: 0.96.0
Reporter: stack
Assignee: stack
 Fix For: 0.98.0, 0.96.1

 Attachments: 9907.txt, 9907.txt


 Patch carried over from parent issue HBASE-9775.  Adds to 
 TestClientNoCluster#main a rig that allows faking many clients against a few 
 servers, and the opposite.  Useful for studying client operation.
 Includes a few changes to protobuf message construction to try to save on a 
 few object creations.
 Also has an edit of the javadoc on how to create an HConnection and HTable, 
 trying to be more forceful about pointing you in the right direction 
 ([~lhofhansl] -- mind reviewing these javadoc changes?)
 I already have a +1 on this patch in the parent issue.  Will run it by 
 hadoopqa to make sure all is good before commit.





[jira] [Updated] (HBASE-9907) Rig to fake a cluster so can profile client behaviors

2013-11-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-9907:
-

Status: Open  (was: Patch Available)

 Rig to fake a cluster so can profile client behaviors
 -

 Key: HBASE-9907
 URL: https://issues.apache.org/jira/browse/HBASE-9907
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Affects Versions: 0.96.0
Reporter: stack
Assignee: stack
 Fix For: 0.98.0, 0.96.1

 Attachments: 9907.txt, 9907.txt


 Patch carried over from parent issue HBASE-9775.  Adds to 
 TestClientNoCluster#main a rig that allows faking many clients against a few 
 servers, and the opposite.  Useful for studying client operation.
 Includes a few changes to protobuf message construction to try to save on a 
 few object creations.
 Also has an edit of the javadoc on how to create an HConnection and HTable, 
 trying to be more forceful about pointing you in the right direction 
 ([~lhofhansl] -- mind reviewing these javadoc changes?)
 I already have a +1 on this patch in the parent issue.  Will run it by 
 hadoopqa to make sure all is good before commit.





[jira] [Updated] (HBASE-9907) Rig to fake a cluster so can profile client behaviors

2013-11-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-9907:
-

Attachment: 9907.txt

Retry for hadoopqa

 Rig to fake a cluster so can profile client behaviors
 -

 Key: HBASE-9907
 URL: https://issues.apache.org/jira/browse/HBASE-9907
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Affects Versions: 0.96.0
Reporter: stack
Assignee: stack
 Fix For: 0.98.0, 0.96.1

 Attachments: 9907.txt, 9907.txt


 Patch carried over from parent issue HBASE-9775.  Adds to 
 TestClientNoCluster#main a rig that allows faking many clients against a few 
 servers, and the opposite.  Useful for studying client operation.
 Includes a few changes to protobuf message construction to try to save on a 
 few object creations.
 Also has an edit of the javadoc on how to create an HConnection and HTable, 
 trying to be more forceful about pointing you in the right direction 
 ([~lhofhansl] -- mind reviewing these javadoc changes?)
 I already have a +1 on this patch in the parent issue.  Will run it by 
 hadoopqa to make sure all is good before commit.





[jira] [Commented] (HBASE-9890) MR jobs are not working if started by a delegated user

2013-11-07 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816155#comment-13816155
 ] 

Jean-Daniel Cryans commented on HBASE-9890:
---

The mapred package is deprecated in 0.94, meaning it should have been removed 
in 0.96, so I don't think we need the fix there.

 MR jobs are not working if started by a delegated user
 --

 Key: HBASE-9890
 URL: https://issues.apache.org/jira/browse/HBASE-9890
 Project: HBase
  Issue Type: Bug
  Components: mapreduce, security
Affects Versions: 0.98.0, 0.94.12, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.98.0, 0.94.13, 0.96.1

 Attachments: HBASE-9890-94-v0.patch, HBASE-9890-94-v1.patch, 
 HBASE-9890-v0.patch, HBASE-9890-v1.patch, HBASE-9890-v2.patch


 If Map-Reduce jobs are started by a proxy user that already has the 
 delegation tokens, we get an exception when obtaining a token, since the proxy 
 user doesn't have the Kerberos auth.
 For example:
  * If we use oozie to execute RowCounter - oozie will get the tokens required 
 (HBASE_AUTH_TOKEN) and it will start the RowCounter. Once the RowCounter 
 tries to obtain the token, it will get an exception.
  * If we use oozie to execute LoadIncrementalHFiles - oozie will get the 
 tokens required (HDFS_DELEGATION_TOKEN) and it will start the 
 LoadIncrementalHFiles. Once the LoadIncrementalHFiles tries to obtain the 
 token, it will get an exception.
 {code}
  org.apache.hadoop.hbase.security.AccessDeniedException: Token generation 
 only allowed for Kerberos authenticated clients
 at 
 org.apache.hadoop.hbase.security.token.TokenProvider.getAuthenticationToken(TokenProvider.java:87)
 {code}
 {code}
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Delegation Token 
 can be issued only with kerberos or web authentication
   at 
 org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:783)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:868)
   at 
 org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:509)
   at 
 org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:487)
   at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:130)
   at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:111)
   at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:85)
   at 
 org.apache.hadoop.filecache.TrackerDistributedCacheManager.getDelegationTokens(TrackerDistributedCacheManager.java:949)
   at 
 org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:854)
   at 
 org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:743)
   at 
 org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:945)
   at org.apache.hadoop.mapreduce.Job.submit(Job.java:566)
   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:596)
   at 
 org.apache.hadoop.hbase.mapreduce.RowCounter.main(RowCounter.java:173)
 {code}





[jira] [Commented] (HBASE-9907) Rig to fake a cluster so can profile client behaviors

2013-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816159#comment-13816159
 ] 

Hadoop QA commented on HBASE-9907:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612639/9907.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7778//console

This message is automatically generated.

 Rig to fake a cluster so can profile client behaviors
 -

 Key: HBASE-9907
 URL: https://issues.apache.org/jira/browse/HBASE-9907
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Affects Versions: 0.96.0
Reporter: stack
Assignee: stack
 Fix For: 0.98.0, 0.96.1

 Attachments: 9907.txt, 9907.txt


 Patch carried over from parent issue HBASE-9775.  Adds to 
 TestClientNoCluster#main a rig that allows faking many clients against a few 
 servers, and the opposite.  Useful for studying client operation.
 Includes a few changes to protobuf message construction to try to save on a 
 few object creations.
 Also has an edit of the javadoc on how to create an HConnection and HTable, 
 trying to be more forceful about pointing you in the right direction 
 ([~lhofhansl] -- mind reviewing these javadoc changes?)
 I already have a +1 on this patch in the parent issue.  Will run it by 
 hadoopqa to make sure all is good before commit.





[jira] [Updated] (HBASE-9792) Region states should update last assignments when a region is opened.

2013-11-07 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-9792:
---

   Resolution: Fixed
Fix Version/s: 0.96.1
   0.98.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Integrated into trunk and 0.96.  Thanks Stack and Sergey for the review.

 Region states should update last assignments when a region is opened.
 -

 Key: HBASE-9792
 URL: https://issues.apache.org/jira/browse/HBASE-9792
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.98.0, 0.96.1

 Attachments: trunk-9792.patch, trunk-9792_v2.patch, 
 trunk-9792_v3.1.patch, trunk-9792_v3.patch


 Currently, we update a region's last assignment region server when the region 
 is online.  We should do this sooner, when the region is moved to OPEN state. 
  CM could kill this region server before we delete the znode and online the 
 region.





[jira] [Commented] (HBASE-9818) NPE in HFileBlock#AbstractFSReader#readAtOffset

2013-11-07 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816191#comment-13816191
 ] 

Ted Yu commented on HBASE-9818:
---

TestHFileBlock#testConcurrentReading shows how multiple FSReaders use the same 
FSDataInputStreamWrapper.

Another solution I can think of is to introduce reference counting for the 
streams that FSDataInputStreamWrapper wraps. FSDataInputStreamWrapper#close() 
would then decrement the count, and would close the underlying stream when the 
count reaches 0.
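The ref-counting idea can be sketched in a few lines. The names here (`StreamWrapper`, `retain`) are illustrative; the real FSDataInputStreamWrapper would wrap an FSDataInputStream rather than the toy `Closeable` used below.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

public class StreamWrapper implements Closeable {
    private final Closeable underlying;
    private final AtomicInteger refCount = new AtomicInteger(1);

    public StreamWrapper(Closeable underlying) { this.underlying = underlying; }

    // A new reader takes a reference before using the stream.
    public StreamWrapper retain() {
        refCount.incrementAndGet();
        return this;
    }

    // close() decrements; the underlying stream closes only at zero.
    @Override
    public void close() throws IOException {
        if (refCount.decrementAndGet() == 0) {
            underlying.close();
        }
    }

    public static void main(String[] args) throws IOException {
        final boolean[] closed = {false};
        StreamWrapper w = new StreamWrapper(() -> closed[0] = true);
        w.retain();                     // second reader
        w.close();                      // first reader done: count 2 -> 1
        System.out.println(closed[0]);  // still referenced, not closed
        w.close();                      // second reader done: count 1 -> 0
        System.out.println(closed[0]);  // underlying stream now closed
    }
}
```

The AtomicInteger makes the decrement-and-test atomic, so two readers closing concurrently cannot both see a nonzero count and leak the stream.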

 NPE in HFileBlock#AbstractFSReader#readAtOffset
 ---

 Key: HBASE-9818
 URL: https://issues.apache.org/jira/browse/HBASE-9818
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Ted Yu
 Attachments: 9818-trial.txt, 9818-v2.txt, 9818-v3.txt, 9818-v4.txt, 
 9818-v5.txt


 HFileBlock#istream seems to be null.  I was wondering whether we should hide 
 FSDataInputStreamWrapper#useHBaseChecksum.
 By the way, this happened when online schema change was enabled (encoding).
 {noformat}
 2013-10-22 10:58:43,321 ERROR [RpcServer.handler=28,port=36020] 
 regionserver.HRegionServer:
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1200)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1436)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:359)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:503)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:553)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:245)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:166)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:361)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:336)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:293)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:258)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:603)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:476)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:129)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3546)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3616)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3494)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3485)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3079)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:27022)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:1979)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:90)
 at 
 org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
 at 
 org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
 at 
 org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
 at java.lang.Thread.run(Thread.java:724)
 2013-10-22 10:58:43,665 ERROR [RpcServer.handler=23,port=36020] 
 regionserver.HRegionServer:
 org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
 nextCallSeq: 53438 But the nextCallSeq got from client: 53437; 
 request=scanner_id: 1252577470624375060 number_of_rows: 100 close_scanner: 
 false next_call_seq: 53437
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3030)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:27022)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:1979)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:90)
 at 
 org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
 at 
 

[jira] [Updated] (HBASE-8741) Scope sequenceid to the region rather than regionserver (WAS: Mutations on Regions in recovery mode might have same sequenceIDs)

2013-11-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-8741:
-

  Resolution: Fixed
Release Note: Sequenceids are now per region rather than per RegionServer.
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to trunk.  Thanks for the nice fixup [~himan...@cloudera.com]

 Scope sequenceid to the region rather than regionserver (WAS: Mutations on 
 Regions in recovery mode might have same sequenceIDs)
 

 Key: HBASE-8741
 URL: https://issues.apache.org/jira/browse/HBASE-8741
 Project: HBase
  Issue Type: Bug
  Components: MTTR
Affects Versions: 0.95.1
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
 Fix For: 0.98.0

 Attachments: HBASE-8741-trunk-v6.1-rebased.patch, 
 HBASE-8741-trunk-v6.2.1.patch, HBASE-8741-trunk-v6.2.2.patch, 
 HBASE-8741-trunk-v6.2.2.patch, HBASE-8741-trunk-v6.3.patch, 
 HBASE-8741-trunk-v6.4.patch, HBASE-8741-trunk-v6.patch, HBASE-8741-v0.patch, 
 HBASE-8741-v2.patch, HBASE-8741-v3.patch, HBASE-8741-v4-again.patch, 
 HBASE-8741-v4-again.patch, HBASE-8741-v4.patch, HBASE-8741-v5-again.patch, 
 HBASE-8741-v5.patch


 Currently, when opening a region, we find the maximum sequence ID from all 
 its HFiles and then set the LogSequenceId of the log (in case the latter is at 
 a smaller value). This works well in the recovered.edits case, as we are not 
 writing to the region until we have replayed all of its previous edits. 
 With distributed log replay, if we want to enable writes while a region is 
 under recovery, we need to make sure that the logSequenceId > maximum 
 logSequenceId of the old regionserver. Otherwise, we might have a situation 
 where new edits have the same (or smaller) sequenceIds. 
 If we store region-level information in the WALTrailer, then this scenario 
 could be avoided by:
 a) reading the trailer of the last completed file, i.e., the last wal file 
 which has a trailer, and
 b) completely reading the last wal file (this file would not have the 
 trailer, so it needs to be read in full).
 In future, if we switch to multiple wal files, we could read the trailers of 
 all completed WAL files and read the remaining incomplete files in full.
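The recovery arithmetic described above can be sketched as follows. This is a toy model under stated assumptions: trailer contents and file layout are faked as lists of sequence ids, and the method names are illustrative, not the HBase WAL API.

```java
import java.util.Arrays;
import java.util.List;

public class NextSeqId {
    // Max sequence id recorded in the trailers of completed WAL files.
    static long maxFromTrailers(List<Long> trailerMaxSeqIds) {
        return trailerMaxSeqIds.stream().mapToLong(Long::longValue).max().orElse(0L);
    }

    // The last WAL file has no trailer, so every edit in it must be read.
    static long maxFromFullScan(List<Long> lastFileEditSeqIds) {
        return lastFileEditSeqIds.stream().mapToLong(Long::longValue).max().orElse(0L);
    }

    public static void main(String[] args) {
        long fromTrailers = maxFromTrailers(Arrays.asList(105L, 230L));
        long fromLastFile = maxFromFullScan(Arrays.asList(231L, 232L));
        // New edits must start strictly above the old server's maximum,
        // or replayed and new edits could share sequence ids.
        long nextSeqId = Math.max(fromTrailers, fromLastFile) + 1;
        System.out.println(nextSeqId);
    }
}
```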





[jira] [Commented] (HBASE-9893) Incorrect assert condition in OrderedBytes decoding

2013-11-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816204#comment-13816204
 ] 

stack commented on HBASE-9893:
--

Patch lgtm (or remove the assert, since this seems to be working around a 
mis-assertion?)  What do you think, [~ndimiduk]?

 Incorrect assert condition in OrderedBytes decoding
 ---

 Key: HBASE-9893
 URL: https://issues.apache.org/jira/browse/HBASE-9893
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.96.0
Reporter: He Liangliang
Assignee: Nick Dimiduk
Priority: Minor
 Attachments: HBASE-9893.patch


 The following assert condition is incorrect when decoding blob var byte array.
 {code}
 assert t == 0 : "Unexpected bits remaining after decoding blob.";
 {code}
 When the number of bytes to decode is a multiple of 8 (i.e. the original 
 number of bytes is a multiple of 7), this assert may fail.





[jira] [Commented] (HBASE-9909) TestHFilePerformance should not be a unit test, but a tool

2013-11-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816206#comment-13816206
 ] 

stack commented on HBASE-9909:
--

+1 on this for now for trunk and 0.96.

 TestHFilePerformance should not be a unit test, but a tool
 --

 Key: HBASE-9909
 URL: https://issues.apache.org/jira/browse/HBASE-9909
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.98.0, 0.96.1

 Attachments: hbase-9909_v1.patch


 TestHFilePerformance is a very old test which does not actually test anything; 
 it is a perf evaluation tool. It is not clear to me whether there is any 
 utility in keeping it, but it should at least be converted to a tool. 
 Note that TestHFile already covers the unit-test cases (writing an hfile with 
 none and gz compression). We do not need to test SequenceFile.





[jira] [Commented] (HBASE-9892) Add info port to ServerName to support multi instances in a node

2013-11-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816208#comment-13816208
 ] 

stack commented on HBASE-9892:
--

[~liushaohui] For trunk, removing HBASE-7027 and replacing it with a pb'd, 
trunk version of the attached patch would be best going forward.  For 0.94: 
yes, ServerLoad is versioned, but changing it puts you in a thorny area, since 
you will have to demonstrate that you have not broken rolling upgrades.  
Thanks for working on this ugly issue.

 Add info port to ServerName to support multi instances in a node
 

 Key: HBASE-9892
 URL: https://issues.apache.org/jira/browse/HBASE-9892
 Project: HBase
  Issue Type: Improvement
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor
 Attachments: HBASE-9892-0.94-v1.diff, HBASE-9892-0.94-v2.diff, 
 HBASE-9892-0.94-v3.diff


 The full GC time of a regionserver with a big heap (~30G) usually cannot be 
 kept under 30s. Meanwhile, servers with 64G of memory are the norm. So we 
 try to deploy multiple RS instances (2-3) on a single node, each with a heap 
 of about 20G ~ 24G.
 Most things work fine, except the HBase web UI: the master gets the RS info 
 port from the conf, which is not suitable for multiple RS instances on one 
 node. So we add the info port to ServerName:
 a. At startup, the RS reports its info port to the HMaster.
 b. For the root region, the RS writes the servername with info port to the 
 zookeeper root-region-server node.
 c. For meta regions, the RS writes the servername with info port to the root 
 region.
 d. For user regions, the RS writes the servername with info port to meta 
 regions.
 So the HMaster and clients can get the info port from the servername.
 To test this feature, I changed the RS count from 1 to 3 in standalone mode, 
 so it can be tested in standalone mode.
 I think Hoya (HBase on YARN) will encounter the same problem. Does anyone 
 know how Hoya handles it?
 PS: There are different formats for the servername in the zk node and the 
 meta table; I think we need to unify them and refactor the code.
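The scheme in the description can be sketched language-neutrally. The Python below is an illustrative simulation only: the real ServerName is a Java class, and the trailing info-port field and the exact string layout here are assumptions for illustration, not HBase's wire format.

```python
# Simulate carrying an info (web UI) port alongside the usual
# host,port,startcode triple, so the master and clients can build a link
# to the right UI when several RS instances share one host.

def encode_server_name(host, rpc_port, startcode, info_port):
    # "host,port,startcode" mirrors HBase's ServerName string form; the
    # fourth field is a hypothetical extension carrying the info port.
    return f"{host},{rpc_port},{startcode},{info_port}"

def decode_server_name(s):
    host, rpc_port, startcode, info_port = s.split(",")
    return host, int(rpc_port), int(startcode), int(info_port)

def web_ui_url(s):
    # With 2-3 RS instances per node, the conf-wide info port is ambiguous;
    # the port embedded in the servername is not.
    host, _, _, info_port = decode_server_name(s)
    return f"http://{host}:{info_port}/rs-status"
```

With two instances on the same host, each reports its own info port, so each instance's servername resolves to a distinct UI address.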



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9906) Restore snapshot fails to restore the meta edits sporadically

2013-11-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816212#comment-13816212
 ] 

stack commented on HBASE-9906:
--

+1. Ugly, but you call it out as such in a comment explaining why.

 Restore snapshot fails to restore the meta edits sporadically  
 ---

 Key: HBASE-9906
 URL: https://issues.apache.org/jira/browse/HBASE-9906
 Project: HBase
  Issue Type: New Feature
  Components: snapshots
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: hbase-9906-0.94_v1.patch, hbase-9906_v1.patch


 After snaphot restore, we see failures to find the table in meta:
 {code}
  disable 'tablefour'
  restore_snapshot 'snapshot_tablefour'
  enable 'tablefour'
 ERROR: Table tablefour does not exist.'
 {code}
 This is quite subtle. From the looks of it, we successfully restore the 
 snapshot, do the meta updates, and return the status to the client. The 
 client then tries to do an operation on the table (like enable table, or 
 scan in the test outputs), which fails because the meta entry for the region 
 seems to be gone (in the case of a single region, the table will be reported 
 missing). Subsequent attempts to create the table will also fail because 
 the table directories will be there, but not the meta entries.
 For restoring meta entries, we are doing a delete then a put to the same 
 region:
 {code}
 2013-11-04 10:39:51,582 INFO 
 org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper: region to restore: 
 76d0e2b7ec3291afcaa82e18a56ccc30
 2013-11-04 10:39:51,582 INFO 
 org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper: region to remove: 
 fa41edf43fe3ee131db4a34b848ff432
 ...
 2013-11-04 10:39:52,102 INFO org.apache.hadoop.hbase.catalog.MetaEditor: 
 Deleted [{ENCODED => fa41edf43fe3ee131db4a34b848ff432, NAME => 
 'tablethree_mod,,1383559723345.fa41edf43fe3ee131db4a34b848ff432.', STARTKEY 
 => '', ENDKEY => ''}, {ENCODED => 76d0e2b7ec3291afcaa82e18a56ccc30, NAME => 
 'tablethree_mod,,1383561123097.76d0e2b7ec3291afcaa82e18a56ccc30.', STARTKE
 2013-11-04 10:39:52,111 INFO org.apache.hadoop.hbase.catalog.MetaEditor: 
 Added 1
 {code}
 The root cause of this sporadic failure is that the delete and the 
 subsequent put will have the same timestamp if they execute in the same ms. 
 The delete will then mask the put at that ts, even though the put has a 
 larger seqNum.
 See: HBASE-9905, HBASE-8770
 Credit goes to [~huned] for reporting this bug. 
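The timestamp collision can be reproduced outside HBase. The Python below is a deliberately simplified model (not HBase's actual KeyValue comparator, and it only masks cells at the exact delete timestamp, whereas real delete markers can cover earlier timestamps too) showing why a delete and a later put issued in the same millisecond leave the row invisible:

```python
# Model cells as (timestamp, type, seqnum). HBase sorts newer timestamps
# first, and at an equal timestamp sorts DELETE ahead of PUT, so a delete
# masks a put with the same ts regardless of which one was written later.
DELETE, PUT = 0, 1  # lower type value sorts first at equal ts

def visible_cells(cells):
    # Sort: newest ts first; at equal ts, deletes ahead of puts.
    ordered = sorted(cells, key=lambda c: (-c[0], c[1]))
    out, deleted_ts = [], set()
    for ts, typ, seq in ordered:
        if typ == DELETE:
            deleted_ts.add(ts)
        elif ts not in deleted_ts:
            out.append((ts, typ, seq))
    return out

# Restore path: delete the old region entry, then put the new one.
same_ms = visible_cells([(1000, DELETE, 5), (1000, PUT, 6)])  # put masked
next_ms = visible_cells([(1000, DELETE, 5), (1001, PUT, 6)])  # put visible
```

The `same_ms` case returns nothing even though the put has the larger seqnum, which is exactly the sporadic "table does not exist" symptom; resolving equal-ts cells by seqnum (HBASE-8770) would make the put win.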



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9778) Avoid seeking to next column in ExplicitColumnTracker when possible

2013-11-07 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816223#comment-13816223
 ] 

Vladimir Rodionov commented on HBASE-9778:
--

Good performance increase, indeed. Hinting is not that bad an idea. Advanced 
users (Phoenix) will definitely find a way to use this optimization.

 Avoid seeking to next column in ExplicitColumnTracker when possible
 ---

 Key: HBASE-9778
 URL: https://issues.apache.org/jira/browse/HBASE-9778
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 9778-0.94-v2.txt, 9778-0.94-v3.txt, 9778-0.94-v4.txt, 
 9778-0.94.txt, 9778-trunk-v2.txt, 9778-trunk-v3.txt, 9778-trunk.txt


 The issue of slow seeking in ExplicitColumnTracker was brought up by 
 [~vrodionov] on the dev list.
 My idea here is to avoid the seeking if we know that there aren't many 
 versions to skip.
 How do we know? We'll use the column family's VERSIONS setting as a hint. If 
 VERSIONS is set to 1 (or maybe some value < 10) we'll avoid the seek and 
 call SKIP repeatedly.
 HBASE-9769 has some initial numbers for this approach:
 Interestingly it depends on which column(s) is (are) selected.
 Some numbers: 4m rows, 5 cols each, 1 cf, 10 bytes values, VERSIONS=1, 
 everything filtered at the server with a ValueFilter. Everything measured in 
 seconds.
 Without patch:
 ||Wildcard||Col 1||Col 2||Col 4||Col 5||Col 2+4||
 |6.4|8.5|14.3|14.6|11.1|20.3|
 With patch:
 ||Wildcard||Col 1||Col 2||Col 4||Col 5||Col 2+4||
 |6.4|8.4|8.9|9.9|6.4|10.0|
 Variation here was +- 0.2s.
 So with this patch scanning is 2x faster than without in some cases, and 
 never slower. No special hint needed, beyond declaring VERSIONS correctly.
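The hint amounts to a simple cost model. The Python below is an illustrative simulation: `SKIP_COST`, `SEEK_COST`, and the threshold are invented numbers, and the real ExplicitColumnTracker logic is more involved than this.

```python
# Decide between repeatedly SKIPping cells and issuing a SEEK to the next
# column. A SEEK consults the block index, so it only pays off when many
# versions must be jumped over. The column family's VERSIONS setting bounds
# how many versions a column can have, so it works as a free hint.
SKIP_COST, SEEK_COST = 1, 20   # invented relative costs for illustration
VERSIONS_THRESHOLD = 10        # "maybe some value < 10" from the issue

def next_column_cost(versions):
    if versions < VERSIONS_THRESHOLD:
        return versions * SKIP_COST  # few versions: skip each one cheaply
    return SEEK_COST                 # many versions: one index-backed seek
```

With VERSIONS=1 the skip path costs a single comparison versus a full seek, which matches the roughly 2x scan speedups reported above with no extra hint beyond declaring VERSIONS correctly.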



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-8770) deletes and puts with the same ts should be resolved according to mvcc/seqNum

2013-11-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-8770:
-

Priority: Critical  (was: Major)

Let's do this.

 deletes and puts with the same ts should be resolved according to mvcc/seqNum
 -

 Key: HBASE-8770
 URL: https://issues.apache.org/jira/browse/HBASE-8770
 Project: HBase
  Issue Type: Brainstorming
Reporter: Sergey Shelukhin
Priority: Critical

 This came up during HBASE-8721. Puts with the same ts are resolved by 
 seqNum. It's not clear why a delete with the same ts as a put should always 
 mask the put, rather than also being resolved by seqNum.
 What do you think?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9047) Tool to handle finishing replication when the cluster is offline

2013-11-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-9047:
-

Attachment: HBASE-9047-trunk-v4.patch

Retry

 Tool to handle finishing replication when the cluster is offline
 

 Key: HBASE-9047
 URL: https://issues.apache.org/jira/browse/HBASE-9047
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.96.0
Reporter: Jean-Daniel Cryans
Assignee: Demai Ni
 Fix For: 0.98.0

 Attachments: HBASE-9047-0.94.9-v0.PATCH, HBASE-9047-trunk-v0.patch, 
 HBASE-9047-trunk-v1.patch, HBASE-9047-trunk-v2.patch, 
 HBASE-9047-trunk-v3.patch, HBASE-9047-trunk-v4.patch, 
 HBASE-9047-trunk-v4.patch


 We're having a discussion on the mailing list about replicating the data on a 
 cluster that was shut down in an offline fashion. The motivation could be 
 that you don't want to bring HBase back up but still need that data on the 
 slave.
 So I have this idea of a tool that would be running on the master cluster 
 while it is down, although it could also run at any time. Basically it would 
 be able to read the replication state of each master region server, finish 
 replicating what's missing to all the slaves, and then clear that state in 
 zookeeper.
 The code that handles replication does most of that already, see 
 ReplicationSourceManager and ReplicationSource. Basically when 
 ReplicationSourceManager.init() is called, it will check all the queues in ZK 
 and try to grab those that aren't attached to a region server. If the whole 
 cluster is down, it will grab all of them.
 The beautiful thing here is that you could start that tool on all your 
 machines and the load will be spread out, but that might not be a big concern 
 if replication wasn't lagging since it would take a few seconds to finish 
 replicating the missing data for each region server.
 I'm guessing when starting ReplicationSourceManager you'd give it a fake 
 region server ID, and you'd tell it not to start its own source.
 FWIW the main difference in how replication is handled between Apache's HBase 
 and Facebook's is that the latter is always done separately of HBase itself. 
 This jira isn't about doing that.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-8770) deletes and puts with the same ts should be resolved according to mvcc/seqNum

2013-11-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-8770:
-

Fix Version/s: 0.98.0

It would be a change in behavior, so it should go into a major version.

 deletes and puts with the same ts should be resolved according to mvcc/seqNum
 -

 Key: HBASE-8770
 URL: https://issues.apache.org/jira/browse/HBASE-8770
 Project: HBase
  Issue Type: Brainstorming
Reporter: Sergey Shelukhin
Priority: Critical
 Fix For: 0.98.0


 This came up during HBASE-8721. Puts with the same ts are resolved by 
 seqNum. It's not clear why a delete with the same ts as a put should always 
 mask the put, rather than also being resolved by seqNum.
 What do you think?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9907) Rig to fake a cluster so can profile client behaviors

2013-11-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-9907:
-

Attachment: 9907v2.txt

Rebase

 Rig to fake a cluster so can profile client behaviors
 -

 Key: HBASE-9907
 URL: https://issues.apache.org/jira/browse/HBASE-9907
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Affects Versions: 0.96.0
Reporter: stack
Assignee: stack
 Fix For: 0.98.0, 0.96.1

 Attachments: 9907.txt, 9907.txt, 9907v2.txt


 Patch carried over from HBASE-9775 parent issue.  Adds to the 
 TestClientNoCluster#main a rig that allows faking many clients against a few 
 servers and the opposite.  Useful for studying client operation.
 Includes a few changes to pb message making to try to save on a few object 
 creations.
 Also has an edit of the javadoc on how to create an HConnection and HTable, 
 trying to be more forceful about pointing you in the right direction 
 ([~lhofhansl] -- mind reviewing these javadoc changes?)
 I have a +1 already on this patch up in parent issue.  Will run by hadoopqa 
 to make sure all good before commit.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9890) MR jobs are not working if started by a delegated user

2013-11-07 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816250#comment-13816250
 ] 

Nick Dimiduk commented on HBASE-9890:
-

The mapred package is marked deprecated because it's marked deprecated in 
Hadoop. We should *not* drop support for it until Hadoop does. Please make sure 
this feature is compatible in both implementations. If you can recommend paths 
to refactor so as to reduce code duplication, I'm all ears :)

 MR jobs are not working if started by a delegated user
 --

 Key: HBASE-9890
 URL: https://issues.apache.org/jira/browse/HBASE-9890
 Project: HBase
  Issue Type: Bug
  Components: mapreduce, security
Affects Versions: 0.98.0, 0.94.12, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.98.0, 0.94.13, 0.96.1

 Attachments: HBASE-9890-94-v0.patch, HBASE-9890-94-v1.patch, 
 HBASE-9890-v0.patch, HBASE-9890-v1.patch, HBASE-9890-v2.patch


 If Map-Reduce jobs are started by a proxy user that already has the 
 delegation tokens, we get an exception on obtaining a token, since the proxy 
 user doesn't have Kerberos auth.
 For example:
  * If we use oozie to execute RowCounter - oozie will get the tokens required 
 (HBASE_AUTH_TOKEN) and it will start the RowCounter. Once the RowCounter 
 tries to obtain the token, it will get an exception.
  * If we use oozie to execute LoadIncrementalHFiles - oozie will get the 
 tokens required (HDFS_DELEGATION_TOKEN) and it will start the 
 LoadIncrementalHFiles. Once the LoadIncrementalHFiles tries to obtain the 
 token, it will get an exception.
 {code}
  org.apache.hadoop.hbase.security.AccessDeniedException: Token generation 
 only allowed for Kerberos authenticated clients
 at 
 org.apache.hadoop.hbase.security.token.TokenProvider.getAuthenticationToken(TokenProvider.java:87)
 {code}
 {code}
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Delegation Token 
 can be issued only with kerberos or web authentication
   at 
 org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:783)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:868)
   at 
 org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:509)
   at 
 org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:487)
   at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:130)
   at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:111)
   at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:85)
   at 
 org.apache.hadoop.filecache.TrackerDistributedCacheManager.getDelegationTokens(TrackerDistributedCacheManager.java:949)
   at 
 org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:854)
   at 
 org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:743)
   at 
 org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:945)
   at org.apache.hadoop.mapreduce.Job.submit(Job.java:566)
   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:596)
   at 
 org.apache.hadoop.hbase.mapreduce.RowCounter.main(RowCounter.java:173)
 {code}
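The fix amounts to skipping token acquisition when a usable token is already present. The Python below is a language-neutral simulation: the stand-in names (`obtain_auth_token`, the `"PROXY"`/`"KERBEROS"` strings) are invented for illustration; the real code works with Hadoop's UserGroupInformation and Token classes.

```python
# Only clients authenticated via Kerberos may ask the server to mint a new
# token. A proxy user (e.g. a job launched by Oozie) instead arrives with
# pre-obtained delegation tokens and must reuse them rather than request
# fresh ones.
def obtain_auth_token(auth_method, existing_tokens, kind="HBASE_AUTH_TOKEN"):
    if kind in existing_tokens:
        return existing_tokens[kind]   # reuse the token Oozie already fetched
    if auth_method != "KERBEROS":
        raise PermissionError(
            "Token generation only allowed for Kerberos authenticated clients")
    return f"new-{kind}"               # pretend the server minted one

# Proxy user with a pre-fetched token: no exception, token reused.
tok = obtain_auth_token("PROXY", {"HBASE_AUTH_TOKEN": "t1"})
```

Without the `existing_tokens` check, the proxy-user path falls straight into the `PermissionError` branch, which mirrors the AccessDeniedException in the stack trace above.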



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9904) Solve skipping data in HTable scans

2013-11-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-9904:
-

 Priority: Critical  (was: Major)
Fix Version/s: 0.98.0

We should try Manukranth's unit test against trunk.  [~jxiang] You might be 
interested in this finding?

 Solve skipping data in HTable scans
 ---

 Key: HBASE-9904
 URL: https://issues.apache.org/jira/browse/HBASE-9904
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.89-fb
Reporter: Manukranth Kolloju
Priority: Critical
 Fix For: 0.89-fb, 0.98.0

 Attachments: scan.diff


 The HTable client cannot retry a scan operation in the 
 getRegionServerWithRetries code path.
 This will result in the client missing data. This can be worked around by 
 setting hbase.client.retries.number to 1.
 The whole problem is that Callable knows nothing about retries, and the 
 protocol it dances to doesn't support retries either.
 This fix will keep the Callable protocol (an ugly thing worth merciless 
 refactoring) intact but will change ScannerCallable to anticipate retries. 
 What we want is to make failed operations identities for the outside world:
 N1 , N2 , F3 , N3 , F4 , F4 , N4 ... = N1 , N2 , N3 , N4 ...
 where Nk are successful operations and Fk are failed operations.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9909) TestHFilePerformance should not be a unit test, but a tool

2013-11-07 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816283#comment-13816283
 ] 

Enis Soztutar commented on HBASE-9909:
--

bq. Should they be merged then? I guess you will say yes, but on a separate 
JIRA? 
bq. Opened HBASE-9910 and HBASE-9911.
You got me! Thanks for opening those.


 TestHFilePerformance should not be a unit test, but a tool
 --

 Key: HBASE-9909
 URL: https://issues.apache.org/jira/browse/HBASE-9909
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.98.0, 0.96.1

 Attachments: hbase-9909_v1.patch


 TestHFilePerformance is a very old test which does not really test anything; 
 it is a perf evaluation tool. It is not clear to me whether there is any 
 utility in keeping it, but it should at least be converted into a tool. 
 Note that TestHFile already covers the unit test cases (writing an hfile 
 with none and gz compression). We do not need to test SequenceFile. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9909) TestHFilePerformance should not be a unit test, but a tool

2013-11-07 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-9909:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed this to trunk and 0.96(test only). Thanks for looking Stack and JM. 

 TestHFilePerformance should not be a unit test, but a tool
 --

 Key: HBASE-9909
 URL: https://issues.apache.org/jira/browse/HBASE-9909
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.98.0, 0.96.1

 Attachments: hbase-9909_v1.patch


 TestHFilePerformance is a very old test which does not really test anything; 
 it is a perf evaluation tool. It is not clear to me whether there is any 
 utility in keeping it, but it should at least be converted into a tool. 
 Note that TestHFile already covers the unit test cases (writing an hfile 
 with none and gz compression). We do not need to test SequenceFile. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9904) Solve skipping data in HTable scans

2013-11-07 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816295#comment-13816295
 ] 

Ted Yu commented on HBASE-9904:
---

{code}
+  private void fixStateOrCancelRetryThanRethrow(IOException e) throws 
IOException {
{code}
There seems to be a typo above: Than -> Then
{code}
+  throw new DoNotRetryIOException("Reset scanner", ioe);
+} else {
{code}
nit: the 'else' keyword is not needed.
{code}
+ * Copyright 2010 The Apache Software Foundation
{code}
Year is not needed.
{code}
+ */
+public class TestScanRetries {
{code}
Please add test category.
{code}
--- 
hadoop/branches/titan/VENDOR.hbase/hbase-trunk/src/main/java/org/apache/hadoop/hbase/client/ScannerCallable.java
{code}
Hadoop QA accepts level 0 and level 1 patches only.

 Solve skipping data in HTable scans
 ---

 Key: HBASE-9904
 URL: https://issues.apache.org/jira/browse/HBASE-9904
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.89-fb
Reporter: Manukranth Kolloju
Priority: Critical
 Fix For: 0.89-fb, 0.98.0

 Attachments: scan.diff


 The HTable client cannot retry a scan operation in the 
 getRegionServerWithRetries code path.
 This will result in the client missing data. This can be worked around by 
 setting hbase.client.retries.number to 1.
 The whole problem is that Callable knows nothing about retries, and the 
 protocol it dances to doesn't support retries either.
 This fix will keep the Callable protocol (an ugly thing worth merciless 
 refactoring) intact but will change ScannerCallable to anticipate retries. 
 What we want is to make failed operations identities for the outside world:
 N1 , N2 , F3 , N3 , F4 , F4 , N4 ... = N1 , N2 , N3 , N4 ...
 where Nk are successful operations and Fk are failed operations.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9808) org.apache.hadoop.hbase.rest.PerformanceEvaluation is out of sync with org.apache.hadoop.hbase.PerformanceEvaluation

2013-11-07 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816296#comment-13816296
 ] 

Nick Dimiduk commented on HBASE-9808:
-

{noformat}
+scan.setCaching(100);
{noformat}

Just omit this line entirely. This can be done on commit -- same with the WS.

Looks good otherwise. +1.

 org.apache.hadoop.hbase.rest.PerformanceEvaluation is out of sync with 
 org.apache.hadoop.hbase.PerformanceEvaluation
 

 Key: HBASE-9808
 URL: https://issues.apache.org/jira/browse/HBASE-9808
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Gustavo Anatoly
 Attachments: HBASE-9808-v1.patch, HBASE-9808-v2.patch, 
 HBASE-9808.patch


 Here is a list of JIRAs whose fixes might have gone into 
 rest.PerformanceEvaluation:
 {code}
 
 r1527817 | mbertozzi | 2013-09-30 15:57:44 -0700 (Mon, 30 Sep 2013) | 1 line
 HBASE-9663 PerformanceEvaluation does not properly honor specified table name 
 parameter
 
 r1526452 | mbertozzi | 2013-09-26 04:58:50 -0700 (Thu, 26 Sep 2013) | 1 line
 HBASE-9662 PerformanceEvaluation input do not handle tags properties
 
 r1525269 | ramkrishna | 2013-09-21 11:01:32 -0700 (Sat, 21 Sep 2013) | 3 lines
 HBASE-8496 - Implement tags and the internals of how a tag should look like 
 (Ram)
 
 r1524985 | nkeywal | 2013-09-20 06:02:54 -0700 (Fri, 20 Sep 2013) | 1 line
 HBASE-9558  PerformanceEvaluation is in hbase-server, and creates a 
 dependency to MiniDFSCluster
 
 r1523782 | nkeywal | 2013-09-16 13:07:13 -0700 (Mon, 16 Sep 2013) | 1 line
 HBASE-9521  clean clearBufferOnFail behavior and deprecate it
 
 r1518341 | jdcryans | 2013-08-28 12:46:55 -0700 (Wed, 28 Aug 2013) | 2 lines
 HBASE-9330 Refactor PE to create HTable the correct way
 {code}
 Long term, we may consider consolidating the two PerformanceEvaluation 
 classes so that such maintenance work can be reduced.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-3787) Increment is non-idempotent but client retries RPC

2013-11-07 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816297#comment-13816297
 ] 

Sergey Shelukhin commented on HBASE-3787:
-

any takers for review? This is a huge pita to rebase when it becomes stale

 Increment is non-idempotent but client retries RPC
 --

 Key: HBASE-3787
 URL: https://issues.apache.org/jira/browse/HBASE-3787
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.94.4, 0.95.2
Reporter: dhruba borthakur
Assignee: Sergey Shelukhin
Priority: Blocker
 Attachments: HBASE-3787-partial.patch, HBASE-3787-v0.patch, 
 HBASE-3787-v1.patch, HBASE-3787-v2.patch, HBASE-3787-v3.patch, 
 HBASE-3787-v4.patch, HBASE-3787-v5.patch, HBASE-3787-v5.patch, 
 HBASE-3787-v6.patch, HBASE-3787-v7.patch, HBASE-3787-v8.patch


 The HTable.increment() operation is non-idempotent. The client retries the 
 increment RPC a few times (as specified by configuration) before throwing an 
 error to the application. This makes it possible that the same increment call 
 be applied twice at the server.
 For increment operations, is it better to use 
 HConnectionManager.getRegionServerWithoutRetries()? Another  option would be 
 to enhance the IPC module to make the RPC server correctly identify if the 
 RPC is a retry attempt and handle accordingly.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9808) org.apache.hadoop.hbase.rest.PerformanceEvaluation is out of sync with org.apache.hadoop.hbase.PerformanceEvaluation

2013-11-07 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816298#comment-13816298
 ] 

Nick Dimiduk commented on HBASE-9808:
-

I don't see this class listed in the findbugs results; I don't think it's this 
patch. Not sure about the javadoc.

 org.apache.hadoop.hbase.rest.PerformanceEvaluation is out of sync with 
 org.apache.hadoop.hbase.PerformanceEvaluation
 

 Key: HBASE-9808
 URL: https://issues.apache.org/jira/browse/HBASE-9808
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Gustavo Anatoly
 Attachments: HBASE-9808-v1.patch, HBASE-9808-v2.patch, 
 HBASE-9808.patch


 Here is a list of JIRAs whose fixes might have gone into 
 rest.PerformanceEvaluation:
 {code}
 
 r1527817 | mbertozzi | 2013-09-30 15:57:44 -0700 (Mon, 30 Sep 2013) | 1 line
 HBASE-9663 PerformanceEvaluation does not properly honor specified table name 
 parameter
 
 r1526452 | mbertozzi | 2013-09-26 04:58:50 -0700 (Thu, 26 Sep 2013) | 1 line
 HBASE-9662 PerformanceEvaluation input do not handle tags properties
 
 r1525269 | ramkrishna | 2013-09-21 11:01:32 -0700 (Sat, 21 Sep 2013) | 3 lines
 HBASE-8496 - Implement tags and the internals of how a tag should look like 
 (Ram)
 
 r1524985 | nkeywal | 2013-09-20 06:02:54 -0700 (Fri, 20 Sep 2013) | 1 line
 HBASE-9558  PerformanceEvaluation is in hbase-server, and creates a 
 dependency to MiniDFSCluster
 
 r1523782 | nkeywal | 2013-09-16 13:07:13 -0700 (Mon, 16 Sep 2013) | 1 line
 HBASE-9521  clean clearBufferOnFail behavior and deprecate it
 
 r1518341 | jdcryans | 2013-08-28 12:46:55 -0700 (Wed, 28 Aug 2013) | 2 lines
 HBASE-9330 Refactor PE to create HTable the correct way
 {code}
 Long term, we may consider consolidating the two PerformanceEvaluation 
 classes so that such maintenance work can be reduced.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9904) Solve skipping data in HTable scans

2013-11-07 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816300#comment-13816300
 ] 

Jimmy Xiang commented on HBASE-9904:


In trunk, we use a call sequence number to prevent such a thing. Due to 
retry, if the client is not on the same page as the region server, it will 
get an exception instead of the next batch of data. This test works fine 
with trunk, so I don't think this is an issue in trunk any more.
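The call-sequence guard can be sketched as follows. This Python simulation is illustrative only: the class and field names are invented, not trunk's actual ScannerCallable/region-server code, though the mismatch-raises behavior mirrors trunk's OutOfOrderScannerNextException.

```python
# The server tracks the expected next call sequence per open scanner. A
# retried request re-sends the same sequence number; instead of silently
# handing back the *next* batch (skipping data), the server detects the
# mismatch and fails loudly so the client can restart the scan.
class Scanner:
    def __init__(self, batches):
        self.batches = list(batches)
        self.expected_seq = 0
        self.pos = 0

    def next(self, call_seq):
        if call_seq != self.expected_seq:
            # Stale retry or dropped call: refuse rather than skip a batch.
            raise RuntimeError("OutOfOrderScannerNextException")
        self.expected_seq += 1
        batch = self.batches[self.pos]
        self.pos += 1
        return batch

s = Scanner([["row1"], ["row2"]])
first = s.next(0)   # seq 0 accepted; server now expects seq 1
```

A retry of `s.next(0)` after the first call raises instead of returning `["row2"]`, which is how trunk turns silent data skipping into a retryable error.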

 Solve skipping data in HTable scans
 ---

 Key: HBASE-9904
 URL: https://issues.apache.org/jira/browse/HBASE-9904
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.89-fb
Reporter: Manukranth Kolloju
Priority: Critical
 Fix For: 0.89-fb, 0.98.0

 Attachments: scan.diff


 The HTable client cannot retry a scan operation in the 
 getRegionServerWithRetries code path.
 This will result in the client missing data. This can be worked around by 
 setting hbase.client.retries.number to 1.
 The whole problem is that Callable knows nothing about retries, and the 
 protocol it dances to doesn't support retries either.
 This fix will keep the Callable protocol (an ugly thing worth merciless 
 refactoring) intact but will change ScannerCallable to anticipate retries. 
 What we want is to make failed operations identities for the outside world:
 N1 , N2 , F3 , N3 , F4 , F4 , N4 ... = N1 , N2 , N3 , N4 ...
 where Nk are successful operations and Fk are failed operations.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9910) TestHFilePerformance and HFilePerformanceEvaluation should be merged in a single HFile performance test class.

2013-11-07 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9910:


Component/s: test

 TestHFilePerformance and HFilePerformanceEvaluation should be merged in a 
 single HFile performance test class.
 --

 Key: HBASE-9910
 URL: https://issues.apache.org/jira/browse/HBASE-9910
 Project: HBase
  Issue Type: Bug
  Components: Performance, test
Reporter: Jean-Marc Spaggiari

 Today TestHFilePerformance and HFilePerformanceEvaluation are doing slightly 
 different kind of performance tests both for the HFile. We should consider 
 merging those 2 tests in a single class.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HBASE-9915) Severe performance bug: isSeeked() in EncodedScannerV2 is always false

2013-11-07 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created HBASE-9915:


 Summary: Severe performance bug: isSeeked() in EncodedScannerV2 is 
always false
 Key: HBASE-9915
 URL: https://issues.apache.org/jira/browse/HBASE-9915
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
 Fix For: 0.98.0, 0.96.1, 0.94.14






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9890) MR jobs are not working if started by a delegated user

2013-11-07 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-9890:
---

Attachment: HBASE-9890-v3.patch

 MR jobs are not working if started by a delegated user
 --

 Key: HBASE-9890
 URL: https://issues.apache.org/jira/browse/HBASE-9890
 Project: HBase
  Issue Type: Bug
  Components: mapreduce, security
Affects Versions: 0.98.0, 0.94.12, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.98.0, 0.94.13, 0.96.1

 Attachments: HBASE-9890-94-v0.patch, HBASE-9890-94-v1.patch, 
 HBASE-9890-v0.patch, HBASE-9890-v1.patch, HBASE-9890-v2.patch, 
 HBASE-9890-v3.patch


 If Map-Reduce jobs are started by a proxy user that already has the 
 delegation tokens, we get an exception on obtaining a token, since the proxy 
 user doesn't have Kerberos auth.
 For example:
  * If we use oozie to execute RowCounter - oozie will get the tokens required 
 (HBASE_AUTH_TOKEN) and it will start the RowCounter. Once the RowCounter 
 tries to obtain the token, it will get an exception.
  * If we use oozie to execute LoadIncrementalHFiles - oozie will get the 
 tokens required (HDFS_DELEGATION_TOKEN) and it will start the 
 LoadIncrementalHFiles. Once the LoadIncrementalHFiles tries to obtain the 
 token, it will get an exception.
 {code}
  org.apache.hadoop.hbase.security.AccessDeniedException: Token generation 
 only allowed for Kerberos authenticated clients
 at 
 org.apache.hadoop.hbase.security.token.TokenProvider.getAuthenticationToken(TokenProvider.java:87)
 {code}
 {code}
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Delegation Token 
 can be issued only with kerberos or web authentication
   at 
 org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:783)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:868)
   at 
 org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:509)
   at 
 org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:487)
   at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:130)
   at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:111)
   at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:85)
   at 
 org.apache.hadoop.filecache.TrackerDistributedCacheManager.getDelegationTokens(TrackerDistributedCacheManager.java:949)
   at 
 org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:854)
   at 
 org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:743)
   at 
 org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:945)
   at org.apache.hadoop.mapreduce.Job.submit(Job.java:566)
   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:596)
   at 
 org.apache.hadoop.hbase.mapreduce.RowCounter.main(RowCounter.java:173)
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9904) Solve skipping data in HTable scans

2013-11-07 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-9904:
---

Attachment: TestScanRetries.java

I attached the revised TestScanRetries.java that works with trunk; I just 
changed it to fit the new interfaces.

 Solve skipping data in HTable scans
 ---

 Key: HBASE-9904
 URL: https://issues.apache.org/jira/browse/HBASE-9904
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.89-fb
Reporter: Manukranth Kolloju
Priority: Critical
 Fix For: 0.89-fb, 0.98.0

 Attachments: TestScanRetries.java, scan.diff


 The HTable client cannot retry a scan operation in the 
 getRegionServerWithRetries code path.
 This will result in the client missing data. This can be worked around by 
 setting hbase.client.retries.number to 1.
 The whole problem is that Callable knows nothing about retries, and the 
 protocol it dances to doesn't support retries either.
 This fix will keep the Callable protocol (an ugly thing worth merciless 
 refactoring) intact but will change ScannerCallable to anticipate retries. 
 What we want is to make failed operations identities to the outside world:
 N1 , N2 , F3 , N3 , F4 , F4 , N4 ... = N1 , N2 , N3 , N4 ...
 where Nk are successful operations and Fk are failed operations.
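The retry-as-identity idea above can be sketched in plain Java (a hypothetical model, not the actual ScannerCallable code): a failed fetch is retried at the same position, so from the caller's point of view failed operations vanish.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch (plain Java, no HBase classes) of "failed operations
// are identities": the retrying caller replays the SAME position after a
// failure, so the consumer sees N1, N2, N3, ... with no row skipped:
//   N1, N2, F3, N3, ... == N1, N2, N3, ...
public class RetryingScanner {
    /** A row source whose fetch may fail transiently. */
    interface Source { String fetch(int position) throws Exception; }

    static List<String> scanWithRetries(Source src, int rows, int maxRetries) {
        List<String> out = new ArrayList<>();
        for (int pos = 0; pos < rows; pos++) {
            int attempts = 0;
            while (true) {
                try {
                    out.add(src.fetch(pos));  // success: move to the next row
                    break;
                } catch (Exception e) {
                    if (++attempts > maxRetries) throw new RuntimeException(e);
                    // otherwise retry the SAME position instead of skipping it
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Fail the first attempt at row 2; the caller must not lose that row.
        Source flaky = new Source() {
            int calls = 0;
            public String fetch(int pos) throws Exception {
                if (pos == 2 && ++calls == 1) throw new Exception("transient");
                return "row" + pos;
            }
        };
        System.out.println(scanWithRetries(flaky, 4, 3));
    }
}
```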





[jira] [Updated] (HBASE-9915) Severe performance bug: isSeeked() in EncodedScannerV2 is always false

2013-11-07 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-9915:
-

Description: 
While debugging why reseek is so slow I found that it is quite broken for 
encoded scanners.
The problem is this:
AbstractScannerV2.reseekTo(...) calls isSeeked() to check whether the scanner 
was seeked. If it was, it checks whether the KV we want to seek to is in the 
current block; if not, it consults the index blocks again.
isSeeked() checks the blockBuffer member, which is not used by 
EncodedScannerV2 and thus always returns false, which in turn causes an index 
lookup for each reseek.
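A minimal plain-Java model (hypothetical classes, not the actual HFile reader code) shows why basing isSeeked() on a field the encoded subclass never sets defeats the cheap reseek path:

```java
// Hypothetical plain-Java model of the HBASE-9915 bug (not the real reader
// code). The base scanner decides "am I seeked?" from a field the encoded
// subclass never populates, so every reseek pays an index lookup.
public class SeekBugDemo {
    static class BaseScanner {
        Object blockBuffer;          // populated by the unencoded scanner only
        int indexLookups = 0;
        boolean isSeeked() { return blockBuffer != null; }
        void seekTo() { blockBuffer = new Object(); }
        void reseekTo() {
            if (!isSeeked()) indexLookups++;  // consult the index blocks again
            // else: stay within the current block (cheap path)
        }
    }
    /** Buggy encoded scanner: keeps its own state, never sets blockBuffer. */
    static class EncodedScanner extends BaseScanner {
        boolean seeked;
        @Override void seekTo() { seeked = true; }  // blockBuffer stays null
    }
    /** Fixed variant: report seekedness from the subclass's own state. */
    static class FixedEncodedScanner extends EncodedScanner {
        @Override boolean isSeeked() { return seeked; }
    }

    static int lookupsAfter(BaseScanner s, int reseeks) {
        s.seekTo();
        for (int i = 0; i < reseeks; i++) s.reseekTo();
        return s.indexLookups;
    }

    public static void main(String[] args) {
        System.out.println(lookupsAfter(new EncodedScanner(), 100));      // 100
        System.out.println(lookupsAfter(new FixedEncodedScanner(), 100)); // 0
    }
}
```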

 Severe performance bug: isSeeked() in EncodedScannerV2 is always false
 --

 Key: HBASE-9915
 URL: https://issues.apache.org/jira/browse/HBASE-9915
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: profile.png




[jira] [Assigned] (HBASE-9915) Severe performance bug: isSeeked() in EncodedScannerV2 is always false

2013-11-07 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reassigned HBASE-9915:


Assignee: Lars Hofhansl

 Severe performance bug: isSeeked() in EncodedScannerV2 is always false
 --

 Key: HBASE-9915
 URL: https://issues.apache.org/jira/browse/HBASE-9915
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: profile.png




[jira] [Updated] (HBASE-9915) Severe performance bug: isSeeked() in EncodedScannerV2 is always false

2013-11-07 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-9915:
-

Attachment: profile.png

Here's a profiler trace of this issue.

 Severe performance bug: isSeeked() in EncodedScannerV2 is always false
 --

 Key: HBASE-9915
 URL: https://issues.apache.org/jira/browse/HBASE-9915
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: profile.png




[jira] [Commented] (HBASE-9908) [WINDOWS] Fix filesystem / classloader related unit tests

2013-11-07 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816310#comment-13816310
 ] 

Nick Dimiduk commented on HBASE-9908:
-

Adding get and setConfiguration, any value in extending Configurable?

All green lights in the Windows host? Do we have Windows jenkins slaves yet?

Looks good to me. +1.

 [WINDOWS] Fix filesystem / classloader related unit tests
 -

 Key: HBASE-9908
 URL: https://issues.apache.org/jira/browse/HBASE-9908
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.98.0, 0.96.1

 Attachments: hbase-9908_v1.patch


 Some of the unit tests related to classloading and the filesystem are 
 failing on Windows.
 {code}
 org.apache.hadoop.hbase.coprocessor.TestClassLoading.testHBase3810
 org.apache.hadoop.hbase.coprocessor.TestClassLoading.testClassLoadingFromLocalFS
 org.apache.hadoop.hbase.coprocessor.TestClassLoading.testPrivateClassLoader
 org.apache.hadoop.hbase.coprocessor.TestClassLoading.testClassLoadingFromRelativeLibDirInJar
 org.apache.hadoop.hbase.coprocessor.TestClassLoading.testClassLoadingFromLibDirInJar
 org.apache.hadoop.hbase.coprocessor.TestClassLoading.testClassLoadingFromHDFS
 org.apache.hadoop.hbase.backup.TestHFileArchiving.testCleaningRace
 org.apache.hadoop.hbase.regionserver.wal.TestDurability.testDurability
 org.apache.hadoop.hbase.regionserver.wal.TestHLog.testMaintainOrderWithConcurrentWrites
 org.apache.hadoop.hbase.security.access.TestAccessController.testBulkLoad
 org.apache.hadoop.hbase.regionserver.TestHRegion.testRecoveredEditsReplayCompaction
 org.apache.hadoop.hbase.regionserver.TestHRegionBusyWait.testRecoveredEditsReplayCompaction
 org.apache.hadoop.hbase.util.TestFSUtils.testRenameAndSetModifyTime
 {code}
 The root causes are: 
  - Using local file names to refer to HDFS paths (HBASE-6830)
  - The classloader using the wrong file system 
  - StoreFile readers not being closed (for an unfinished compaction)





[jira] [Updated] (HBASE-9915) Severe performance bug: isSeeked() in EncodedScannerV2 is always false

2013-11-07 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-9915:
-

Attachment: 9915-0.94.txt

Here's a 0.94 patch that fixes this for me.
Only done minimal testing with this.

Speeds up some Phoenix queries (which used block encoding by default) by almost 
50% ([~giacomotaylor])

 Severe performance bug: isSeeked() in EncodedScannerV2 is always false
 --

 Key: HBASE-9915
 URL: https://issues.apache.org/jira/browse/HBASE-9915
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 9915-0.94.txt, profile.png




[jira] [Comment Edited] (HBASE-9915) Severe performance bug: isSeeked() in EncodedScannerV2 is always false

2013-11-07 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816311#comment-13816311
 ] 

Lars Hofhansl edited comment on HBASE-9915 at 11/7/13 7:28 PM:
---

Here's a 0.94 patch that fixes this for me.
Only done minimal testing with this.

Speeds up some Phoenix queries (which used block encoding by default) by almost 
50% ([~giacomotaylor], FYI)


was (Author: lhofhansl):
Here's a 0.94 patch that fixes this for me.
Only done minimal testing with this.

Speeds up some Phoenix queries (which used block encoding by default) by almost 
50% ([~giacomotaylor])

 Severe performance bug: isSeeked() in EncodedScannerV2 is always false
 --

 Key: HBASE-9915
 URL: https://issues.apache.org/jira/browse/HBASE-9915
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 9915-0.94.txt, profile.png




[jira] [Comment Edited] (HBASE-9904) Solve skipping data in HTable scans

2013-11-07 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816306#comment-13816306
 ] 

Jimmy Xiang edited comment on HBASE-9904 at 11/7/13 7:31 PM:
-

I attached the revised TestScanRetries.java that works with trunk; I just 
changed it to fit the new interfaces. BTW, I missed one thing: we need to set 
caching/batch to 1 at line 133, since there are only 6 rows:
{code}
scan.setBatch(1);
scan.setCaching(1);
{code}


was (Author: jxiang):
I attached the revised TestScanRetries.java that works with trunk, just changed 
it to fit with the new interfaces.

 Solve skipping data in HTable scans
 ---

 Key: HBASE-9904
 URL: https://issues.apache.org/jira/browse/HBASE-9904
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.89-fb
Reporter: Manukranth Kolloju
Priority: Critical
 Fix For: 0.89-fb, 0.98.0

 Attachments: TestScanRetries.java, scan.diff




[jira] [Commented] (HBASE-9892) Add info port to ServerName to support multi instances in a node

2013-11-07 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816320#comment-13816320
 ] 

Enis Soztutar commented on HBASE-9892:
--

bq. Since HServerLoad has a version field, I think we can add an info port 
field and keep compatibility.
The problem with versioned writables is that, on the serializing side, you 
have to know the version of the deserializing side. For example, in a rolling 
restart scenario, even if you version HServerLoad, the master will throw 
exceptions if it has not been updated yet, because it will not know about the 
new version number. 
bq. For trunk, removal of HBASE-7027 and replacing it with a pb'd and trunk 
version of the attached patch would be best going forward.
Agreed. Let's forward-port this patch to trunk and go with it. We have to PB 
the RegionServerInfo object in the RB patch. Undoing HBASE-7027 can be done in 
a subtask after this is committed. 
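The rolling-restart failure mode with versioned writables can be sketched as follows (a hypothetical plain-Java model; the field names are illustrative, not the real HServerLoad format). An old reader hard-fails on a version it has never seen, whereas a tag-length-value format like protobuf can skip unknown fields:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical sketch of the rolling-restart problem with versioned
// writables: the serializer cannot know whether its peer understands the new
// version, and the old reader must reject anything newer.
public class VersionedLoadDemo {
    /** New server writes version 2 with an extra infoPort field. */
    static byte[] writeV2(int load, int infoPort) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeByte(2);        // version
        out.writeInt(load);
        out.writeInt(infoPort);  // unknown to version-1 readers
        return bos.toByteArray();
    }

    /** Old master only knows version 1 and must reject anything newer. */
    static int readV1(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        byte version = in.readByte();
        if (version != 1) throw new IOException("unknown version: " + version);
        return in.readInt();
    }

    public static void main(String[] args) throws IOException {
        try {
            readV1(writeV2(42, 60030));
        } catch (IOException e) {
            // This is the rolling-restart failure: the not-yet-updated master
            // cannot parse the newer payload at all.
            System.out.println(e.getMessage());
        }
    }
}
```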

 Add info port to ServerName to support multi instances in a node
 

 Key: HBASE-9892
 URL: https://issues.apache.org/jira/browse/HBASE-9892
 Project: HBase
  Issue Type: Improvement
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor
 Attachments: HBASE-9892-0.94-v1.diff, HBASE-9892-0.94-v2.diff, 
 HBASE-9892-0.94-v3.diff


 The full GC time of a regionserver with a big heap (30G) usually cannot be 
 kept under 30s, while servers with 64G of memory are common. So we try to 
 deploy multiple rs instances (2-3) on a single node, with the heap of each 
 rs at about 20G ~ 24G.
 Most things work fine, except the hbase web ui. The master gets the RS info 
 port from conf, which is not suitable for this situation of multiple rs 
 instances on a node. So we add the info port to ServerName:
 a. At startup, the rs reports its info port to HMaster.
 b. For the root region, the rs writes the servername with info port to the 
 zookeeper root-region-server node.
 c. For meta regions, the rs writes the servername with info port to the root 
 region.
 d. For user regions, the rs writes the servername with info port to the meta 
 regions.
 So hmaster and clients can get the info port from the servername.
 To test this feature, I changed the rs num from 1 to 3 in standalone mode, 
 so we can test it in standalone mode.
 I think Hoya (hbase on yarn) will encounter the same problem. Does anyone 
 know how Hoya handles this problem?
 PS: There are different formats for the servername in the zk node and the 
 meta table; I think we need to unify them and refactor the code.
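Steps a-d above amount to carrying the info port inside the serialized servername so any reader of the znode or meta entry can recover it by parsing. A hypothetical sketch (the real ServerName wire format may differ):

```java
// Hypothetical sketch (not the actual HBase ServerName code): carry the web
// UI info port inside the servername string written to ZooKeeper/meta, so
// the master and clients recover it by parsing rather than reading a conf
// key that cannot distinguish multiple RS instances on one node.
public class ServerNameWithInfoPort {
    final String host;
    final int rpcPort;
    final long startCode;
    final int infoPort;

    ServerNameWithInfoPort(String host, int rpcPort, long startCode, int infoPort) {
        this.host = host; this.rpcPort = rpcPort;
        this.startCode = startCode; this.infoPort = infoPort;
    }

    /** One string form for znodes and meta cells, e.g. "node1,60020,1383861600000,60030". */
    String serialize() {
        return host + "," + rpcPort + "," + startCode + "," + infoPort;
    }

    static ServerNameWithInfoPort parse(String s) {
        String[] parts = s.split(",");
        return new ServerNameWithInfoPort(parts[0], Integer.parseInt(parts[1]),
            Long.parseLong(parts[2]), Integer.parseInt(parts[3]));
    }

    public static void main(String[] args) {
        // Two RS instances on the same node, distinguished only by their ports.
        ServerNameWithInfoPort rs1 = new ServerNameWithInfoPort("node1", 60020, 1L, 60030);
        ServerNameWithInfoPort rs2 = new ServerNameWithInfoPort("node1", 60021, 1L, 60031);
        System.out.println(parse(rs1.serialize()).infoPort);
        System.out.println(parse(rs2.serialize()).infoPort);
    }
}
```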





[jira] [Commented] (HBASE-9915) Severe performance bug: isSeeked() in EncodedScannerV2 is always false

2013-11-07 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816323#comment-13816323
 ] 

James Taylor commented on HBASE-9915:
-

+1. Fantastic, [~lhofhansl]! Great work.

 Severe performance bug: isSeeked() in EncodedScannerV2 is always false
 --

 Key: HBASE-9915
 URL: https://issues.apache.org/jira/browse/HBASE-9915
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 9915-0.94.txt, profile.png




[jira] [Commented] (HBASE-9915) Severe performance bug: isSeeked() in EncodedScannerV2 is always false

2013-11-07 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816329#comment-13816329
 ] 

Jean-Marc Spaggiari commented on HBASE-9915:


Have you compared performance with and without the patch? I have a fresh 0.94 
perf test running; I can try to apply this and re-run to see the difference...

 Severe performance bug: isSeeked() in EncodedScannerV2 is always false
 --

 Key: HBASE-9915
 URL: https://issues.apache.org/jira/browse/HBASE-9915
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 9915-0.94.txt, profile.png




[jira] [Commented] (HBASE-9908) [WINDOWS] Fix filesystem / classloader related unit tests

2013-11-07 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816330#comment-13816330
 ] 

Enis Soztutar commented on HBASE-9908:
--

Thanks Nick for taking a look. 
bq. Adding get and setConfiguration, any value in extending Configurable?
Agreed that we can do it. These method sigs are just moved one level up to 
HBaseCommonTestingUtil. If we extend Configured, the method sigs will change as 
well, which will make the patch much bigger. 
bq. All green lights in the Windows host? 
For the fixed test cases, yes. 
bq. Do we have Windows jenkins slaves yet?
Not yet. See HBASE-6819. There are some server donations happening, but not 
fully finished. 
bq. Looks good to me. +1.
Will commit this by EOD to trunk and 0.96 unless objection. 

 [WINDOWS] Fix filesystem / classloader related unit tests
 -

 Key: HBASE-9908
 URL: https://issues.apache.org/jira/browse/HBASE-9908
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.98.0, 0.96.1

 Attachments: hbase-9908_v1.patch




[jira] [Commented] (HBASE-9915) Severe performance bug: isSeeked() in EncodedScannerV2 is always false

2013-11-07 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816335#comment-13816335
 ] 

Ted Yu commented on HBASE-9915:
---

Nice finding.

+1

 Severe performance bug: isSeeked() in EncodedScannerV2 is always false
 --

 Key: HBASE-9915
 URL: https://issues.apache.org/jira/browse/HBASE-9915
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 9915-0.94.txt, profile.png




[jira] [Updated] (HBASE-9915) Severe performance bug: isSeeked() in EncodedScannerV2 is always false

2013-11-07 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated HBASE-9915:


Tags: Phoenix

 Severe performance bug: isSeeked() in EncodedScannerV2 is always false
 --

 Key: HBASE-9915
 URL: https://issues.apache.org/jira/browse/HBASE-9915
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 9915-0.94.txt, profile.png




[jira] [Updated] (HBASE-9915) Severe performance bug: isSeeked() in EncodedScannerV2 is always false

2013-11-07 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-9915:
-

Attachment: 9915-0.94-v2.txt

Actually this is slightly better.

 Severe performance bug: isSeeked() in EncodedScannerV2 is always false
 --

 Key: HBASE-9915
 URL: https://issues.apache.org/jira/browse/HBASE-9915
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 9915-0.94-v2.txt, 9915-0.94.txt, profile.png




[jira] [Commented] (HBASE-9915) Severe performance bug: isSeeked() in EncodedScannerV2 is always false

2013-11-07 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816348#comment-13816348
 ] 

Jimmy Xiang commented on HBASE-9915:


Great. +1.

There is another performance-related issue with the encoded scanner. In 
DataBlockEncoding#isCorrectEncoder, it checks the wrong class, so it ends up 
always returning false and a new encoder is created for each encoded data 
block. I will fix this in HBASE-9870.
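A hypothetical sketch of that kind of cache-check bug (illustrative classes, not the actual DataBlockEncoding code): comparing the cached object against the wrong class means the cache never hits, so an encoder is allocated per block.

```java
// Hypothetical sketch of an encoder-cache bug like the one described above:
// the "is this cached encoder still correct?" check compares against the
// wrong class, so it never matches and a fresh encoder is built every time.
public class EncoderCacheDemo {
    static class FastDiffEncoder {}
    static class PrefixEncoder {}

    static int allocations = 0;
    static Object cached;

    static Object getEncoderBuggy() {
        // BUG: checks for PrefixEncoder even though we cache FastDiffEncoder,
        // so the branch below always allocates.
        if (!(cached instanceof PrefixEncoder)) {
            cached = new FastDiffEncoder();
            allocations++;
        }
        return cached;
    }

    static Object getEncoderFixed() {
        if (!(cached instanceof FastDiffEncoder)) {  // compare the right class
            cached = new FastDiffEncoder();
            allocations++;
        }
        return cached;
    }

    /** Counts allocations over a run of blocks for either variant. */
    static int allocationsOver(int blocks, boolean fixed) {
        allocations = 0;
        cached = null;
        for (int i = 0; i < blocks; i++) {
            if (fixed) getEncoderFixed(); else getEncoderBuggy();
        }
        return allocations;
    }

    public static void main(String[] args) {
        System.out.println(allocationsOver(100, false)); // 100 allocations
        System.out.println(allocationsOver(100, true));  // 1 allocation
    }
}
```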

 Severe performance bug: isSeeked() in EncodedScannerV2 is always false
 --

 Key: HBASE-9915
 URL: https://issues.apache.org/jira/browse/HBASE-9915
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 9915-0.94-v2.txt, 9915-0.94.txt, profile.png




[jira] [Commented] (HBASE-9915) Severe performance bug: isSeeked() in EncodedScannerV2 is always false

2013-11-07 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816350#comment-13816350
 ] 

Lars Hofhansl commented on HBASE-9915:
--

[~jmspaggi] Yeah, I did some count queries in Phoenix. Without the patch they 
took 27s; with the patch they take 14s. You'll see the improvement if (1) you 
use a block encoder (like FAST_DIFF, etc.) and (2) you add some columns to 
your Scan object (so that the ExplicitColumnTracker is used under the hood). I 
am not sure any of the performance evaluation tests do that (and if not, we 
should probably add that).

Making a trunk patch now for a full test run.

 Severe performance bug: isSeeked() in EncodedScannerV2 is always false
 --

 Key: HBASE-9915
 URL: https://issues.apache.org/jira/browse/HBASE-9915
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 9915-0.94-v2.txt, 9915-0.94.txt, profile.png




[jira] [Commented] (HBASE-9915) Severe performance bug: isSeeked() in EncodedScannerV2 is always false

2013-11-07 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816351#comment-13816351
 ] 

Jean-Marc Spaggiari commented on HBASE-9915:


Is it possible to add some comments to isSeeked()? Like what it's based on, 
why, etc.?

 Severe performance bug: isSeeked() in EncodedScannerV2 is always false
 --

 Key: HBASE-9915
 URL: https://issues.apache.org/jira/browse/HBASE-9915
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 9915-0.94-v2.txt, 9915-0.94.txt, profile.png




[jira] [Commented] (HBASE-9915) Severe performance bug: isSeeked() in EncodedScannerV2 is always false

2013-11-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816352#comment-13816352
 ] 

stack commented on HBASE-9915:
--

Good one [~lhofhansl].  You ain't just a pretty face.

 Severe performance bug: isSeeked() in EncodedScannerV2 is always false
 --

 Key: HBASE-9915
 URL: https://issues.apache.org/jira/browse/HBASE-9915
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 9915-0.94-v2.txt, 9915-0.94.txt, profile.png




[jira] [Updated] (HBASE-9915) Severe performance bug: isSeeked() in EncodedScannerV2 is always false

2013-11-07 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-9915:
-

Attachment: 9915-trunk.txt

And a trunk patch (also added a comment requested by JM).

 Severe performance bug: isSeeked() in EncodedScannerV2 is always false
 --

 Key: HBASE-9915
 URL: https://issues.apache.org/jira/browse/HBASE-9915
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 9915-0.94-v2.txt, 9915-0.94.txt, 9915-trunk.txt, 
 profile.png




[jira] [Updated] (HBASE-9915) Severe performance bug: isSeeked() in EncodedScannerV2 is always false

2013-11-07 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-9915:
-

Status: Patch Available  (was: Open)

 Severe performance bug: isSeeked() in EncodedScannerV2 is always false
 --

 Key: HBASE-9915
 URL: https://issues.apache.org/jira/browse/HBASE-9915
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 9915-0.94-v2.txt, 9915-0.94.txt, 9915-trunk.txt, 
 profile.png


 While debugging why reseek is so slow I found that it is quite broken for 
 encoded scanners.
 The problem is this:
 AbstractScannerV2.reseekTo(...) calls isSeeked() to check whether the scanner 
 was seeked or not. If it was, it checks whether the KV we want to seek to is 
 in the current block; if not, it consults the index blocks again.
 isSeeked() checks the blockBuffer member, which is not used by 
 EncodedScannerV2 and thus always returns false, which in turn causes an index 
 lookup for each reseek.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-3787) Increment is non-idempotent but client retries RPC

2013-11-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816366#comment-13816366
 ] 

stack commented on HBASE-3787:
--

[~sershe] Is latest on RB boss?

 Increment is non-idempotent but client retries RPC
 --

 Key: HBASE-3787
 URL: https://issues.apache.org/jira/browse/HBASE-3787
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.94.4, 0.95.2
Reporter: dhruba borthakur
Assignee: Sergey Shelukhin
Priority: Blocker
 Attachments: HBASE-3787-partial.patch, HBASE-3787-v0.patch, 
 HBASE-3787-v1.patch, HBASE-3787-v2.patch, HBASE-3787-v3.patch, 
 HBASE-3787-v4.patch, HBASE-3787-v5.patch, HBASE-3787-v5.patch, 
 HBASE-3787-v6.patch, HBASE-3787-v7.patch, HBASE-3787-v8.patch


 The HTable.increment() operation is non-idempotent. The client retries the 
 increment RPC a few times (as specified by configuration) before throwing an 
 error to the application. This makes it possible for the same increment to 
 be applied twice at the server.
 For increment operations, would it be better to use 
 HConnectionManager.getRegionServerWithoutRetries()? Another option would be 
 to enhance the IPC module so that the RPC server can correctly identify 
 whether an RPC is a retry attempt and handle it accordingly.
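One way to make a retried increment safe — sketched below with hypothetical names, not the actual HBase implementation — is for the client to attach a unique nonce to each increment so the server can recognize and skip a retry of an RPC it has already applied:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical server-side dedup sketch; names are illustrative.
class CounterServer {
    private final Map<String, Long> counters = new HashMap<>();
    private final Set<Long> appliedNonces = new HashSet<>();

    // Applies the increment only if this nonce has not been seen before;
    // a retried RPC carrying the same nonce becomes a no-op.
    long increment(String row, long delta, long nonce) {
        if (appliedNonces.add(nonce)) {
            counters.merge(row, delta, Long::sum);
        }
        return counters.getOrDefault(row, 0L);
    }
}
```

A client retry that resends the same nonce then returns the already-applied value instead of double-counting.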



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9885) Avoid some Result creation in protobuf conversions

2013-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816373#comment-13816373
 ] 

Hudson commented on HBASE-9885:
---

FAILURE: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #830 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/830/])
HBASE-9885 Avoid some Result creation in protobuf conversions (nkeywal: rev 
1539692)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java


 Avoid some Result creation in protobuf conversions
 --

 Key: HBASE-9885
 URL: https://issues.apache.org/jira/browse/HBASE-9885
 Project: HBase
  Issue Type: Bug
  Components: Client, Protobufs, regionserver
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1

 Attachments: 9885.v1.patch, 9885.v2, 9885.v2.patch, 9885.v3.patch, 
 9885.v3.patch, 9885.v4.patch


 We create a lot of Result objects that we could avoid, as they contain 
 nothing but a boolean value. We also sometimes create a protobuf builder on 
 this path; this too can be avoided.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9003) TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar

2013-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816374#comment-13816374
 ] 

Hudson commented on HBASE-9003:
---

FAILURE: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #830 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/830/])
HBASE-9003 Remove the jamon generated classes from the findbugs analysis 
(nkeywal: rev 1539599)
* /hbase/trunk/dev-support/findbugs-exclude.xml
* /hbase/trunk/dev-support/test-patch.properties


 TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar
 -

 Key: HBASE-9003
 URL: https://issues.apache.org/jira/browse/HBASE-9003
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.92.2, 0.95.1, 0.94.9
Reporter: Esteban Gutierrez

 This is the problem: {{TableMapReduceUtil#addDependencyJars}} relies on 
 {{org.apache.hadoop.util.JarFinder}}, if available, to call {{getJar()}}. 
 However, {{getJar()}} uses File.createTempFile() to create a temporary file 
 under {{hadoop.tmp.dir}}{{/target/test-dir}}. Due to HADOOP-9737, the 
 created jar and its contents are not purged after the JVM exits. Since most 
 configurations point {{hadoop.tmp.dir}} under {{/tmp}}, the generated jar 
 files get purged by {{tmpwatch}} or a similar tool, but boxes that have 
 {{hadoop.tmp.dir}} pointing to a different location not monitored by 
 {{tmpwatch}} will pile up a collection of jars, causing all kinds of issues. 
 Since {{JarFinder#getJar}} is not a public Hadoop API (see [~tucu00]'s 
 comment on HADOOP-9737), we shouldn't use it in 
 {{TableMapReduceUtil}}, in order to avoid these issues.
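A minimal sketch of one mitigation (illustrative names only; this is not the actual JarFinder code): if a caller must create such a temporary jar itself, registering it for deletion on JVM exit at least avoids the pile-up on boxes where nothing sweeps the directory.

```java
import java.io.File;
import java.io.IOException;

// Hypothetical helper; sketches the temp-jar lifecycle, not Hadoop's code.
class TempJar {
    static File createJar(File dir) {
        try {
            // Same primitive JarFinder uses: a temp file under a given dir.
            File jar = File.createTempFile("hadoop-", ".jar", dir);
            // Mitigation: ask the JVM to remove it on normal exit, rather
            // than relying on tmpwatch to sweep the directory.
            jar.deleteOnExit();
            return jar;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Note that deleteOnExit() only helps on clean shutdown; a killed JVM still leaks the file, which is why relying on a non-public API here is fragile.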



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9907) Rig to fake a cluster so can profile client behaviors

2013-11-07 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816375#comment-13816375
 ] 

Nick Dimiduk commented on HBASE-9907:
-

Sorry mate, v2 is no good.

 Rig to fake a cluster so can profile client behaviors
 -

 Key: HBASE-9907
 URL: https://issues.apache.org/jira/browse/HBASE-9907
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Affects Versions: 0.96.0
Reporter: stack
Assignee: stack
 Fix For: 0.98.0, 0.96.1

 Attachments: 9907.txt, 9907.txt, 9907v2.txt


 Patch carried over from the HBASE-9775 parent issue.  Adds to 
 TestClientNoCluster#main a rig that allows faking many clients against a few 
 servers, and the opposite.  Useful for studying client operation.
 Includes a few changes to pb makings to try to save on a few creations.
 Also has an edit of the javadoc on how to create an HConnection and HTable, 
 trying to be more forceful about pointing you in the right direction 
 ([~lhofhansl] -- mind reviewing these javadoc changes?)
 I have a +1 on this patch already up in the parent issue.  Will run it by 
 hadoopqa to make sure all is good before commit.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9792) Region states should update last assignments when a region is opened.

2013-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816371#comment-13816371
 ] 

Hudson commented on HBASE-9792:
---

FAILURE: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #830 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/830/])
HBASE-9792 Region states should update last assignments when a region is opened 
(jxiang: rev 1539728)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManager.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java


 Region states should update last assignments when a region is opened.
 -

 Key: HBASE-9792
 URL: https://issues.apache.org/jira/browse/HBASE-9792
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.98.0, 0.96.1

 Attachments: trunk-9792.patch, trunk-9792_v2.patch, 
 trunk-9792_v3.1.patch, trunk-9792_v3.patch


 Currently, we update a region's last-assignment region server when the 
 region comes online.  We should do this sooner, when the region is moved to 
 the OPEN state.  CM could kill this region server before we delete the znode 
 and online the region.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-8741) Scope sequenceid to the region rather than regionserver (WAS: Mutations on Regions in recovery mode might have same sequenceIDs)

2013-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816372#comment-13816372
 ] 

Hudson commented on HBASE-8741:
---

FAILURE: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #830 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/830/])
HBASE-8741 Scope sequenceid to the region rather than regionserver (WAS: 
Mutations on Regions in recovery mode might have same sequenceIDs) (stack: rev 
1539743)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLog.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogUtil.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestWALObserver.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHLogRecordReader.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/HLogPerformanceEvaluation.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLog.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLogSplit.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollingNoCluster.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALActionsListener.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationHLogReaderManager.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManager.java


 Scope sequenceid to the region rather than regionserver (WAS: Mutations on 
 Regions in recovery mode might have same sequenceIDs)
 

 Key: HBASE-8741
 URL: https://issues.apache.org/jira/browse/HBASE-8741
 Project: HBase
  Issue Type: Bug
  Components: MTTR
Affects Versions: 0.95.1
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
 Fix For: 0.98.0

 Attachments: HBASE-8741-trunk-v6.1-rebased.patch, 
 HBASE-8741-trunk-v6.2.1.patch, HBASE-8741-trunk-v6.2.2.patch, 
 HBASE-8741-trunk-v6.2.2.patch, HBASE-8741-trunk-v6.3.patch, 
 HBASE-8741-trunk-v6.4.patch, HBASE-8741-trunk-v6.patch, HBASE-8741-v0.patch, 
 HBASE-8741-v2.patch, HBASE-8741-v3.patch, HBASE-8741-v4-again.patch, 
 HBASE-8741-v4-again.patch, HBASE-8741-v4.patch, HBASE-8741-v5-again.patch, 
 HBASE-8741-v5.patch


 Currently, when opening a region, we find the maximum sequence ID from all 
 its HFiles and then set the LogSequenceId of the log (in case the latter is 
 at a smaller value). This works well in the recovered.edits case, as we are 
 not writing to the region until we have replayed all of its previous edits. 
 With distributed log replay, if we want to enable writes while a region is 
 under recovery, we need to make sure that the logSequenceId is greater than 
 the maximum logSequenceId of the old regionserver. Otherwise, we might have 
 a situation where new edits have the same (or smaller) sequenceIds. 
 If we can store region-level information in the WALTrailer, then this 
 scenario could be avoided by:
 a) reading the trailer of the last completed file, i.e., the last wal file 
 which has a trailer, and
 b) completely reading the last wal file (this file would not have the 
 trailer, so it needs to be read completely).
 In the future, if we switch to multiple wal files, we could read the 
 trailers of all completed WAL files and read the remaining incomplete files.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9808) org.apache.hadoop.hbase.rest.PerformanceEvaluation is out of sync with org.apache.hadoop.hbase.PerformanceEvaluation

2013-11-07 Thread Gustavo Anatoly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gustavo Anatoly updated HBASE-9808:
---

Attachment: HBASE-9808-v3.patch

Omitted the line {code}scan.setCaching(100);{code} in version 3.

Thanks for review.

 org.apache.hadoop.hbase.rest.PerformanceEvaluation is out of sync with 
 org.apache.hadoop.hbase.PerformanceEvaluation
 

 Key: HBASE-9808
 URL: https://issues.apache.org/jira/browse/HBASE-9808
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Gustavo Anatoly
 Attachments: HBASE-9808-v1.patch, HBASE-9808-v2.patch, 
 HBASE-9808-v3.patch, HBASE-9808.patch


 Here is list of JIRAs whose fixes might have gone into 
 rest.PerformanceEvaluation :
 {code}
 
 r1527817 | mbertozzi | 2013-09-30 15:57:44 -0700 (Mon, 30 Sep 2013) | 1 line
 HBASE-9663 PerformanceEvaluation does not properly honor specified table name 
 parameter
 
 r1526452 | mbertozzi | 2013-09-26 04:58:50 -0700 (Thu, 26 Sep 2013) | 1 line
 HBASE-9662 PerformanceEvaluation input do not handle tags properties
 
 r1525269 | ramkrishna | 2013-09-21 11:01:32 -0700 (Sat, 21 Sep 2013) | 3 lines
 HBASE-8496 - Implement tags and the internals of how a tag should look like 
 (Ram)
 
 r1524985 | nkeywal | 2013-09-20 06:02:54 -0700 (Fri, 20 Sep 2013) | 1 line
 HBASE-9558  PerformanceEvaluation is in hbase-server, and creates a 
 dependency to MiniDFSCluster
 
 r1523782 | nkeywal | 2013-09-16 13:07:13 -0700 (Mon, 16 Sep 2013) | 1 line
 HBASE-9521  clean clearBufferOnFail behavior and deprecate it
 
 r1518341 | jdcryans | 2013-08-28 12:46:55 -0700 (Wed, 28 Aug 2013) | 2 lines
 HBASE-9330 Refactor PE to create HTable the correct way
 {code}
 Long term, we may consider consolidating the two PerformanceEvaluation 
 classes so that such maintenance work can be reduced.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9047) Tool to handle finishing replication when the cluster is offline

2013-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816377#comment-13816377
 ] 

Hadoop QA commented on HBASE-9047:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12612659/HBASE-9047-trunk-v4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7779//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7779//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7779//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7779//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7779//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7779//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7779//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7779//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7779//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7779//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7779//console

This message is automatically generated.

 Tool to handle finishing replication when the cluster is offline
 

 Key: HBASE-9047
 URL: https://issues.apache.org/jira/browse/HBASE-9047
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.96.0
Reporter: Jean-Daniel Cryans
Assignee: Demai Ni
 Fix For: 0.98.0

 Attachments: HBASE-9047-0.94.9-v0.PATCH, HBASE-9047-trunk-v0.patch, 
 HBASE-9047-trunk-v1.patch, HBASE-9047-trunk-v2.patch, 
 HBASE-9047-trunk-v3.patch, HBASE-9047-trunk-v4.patch, 
 HBASE-9047-trunk-v4.patch


 We're having a discussion on the mailing list about replicating the data on a 
 cluster that was shut down in an offline fashion. The motivation could be 
 that you don't want to bring HBase back up but still need that data on the 
 slave.
 So I have this idea of a tool that would be running on the master cluster 
 while it is down, although it could also run at any time. Basically it would 
 be able to read the replication state of each master region server, finish 
 replicating what's missing to all the slaves, and then clear that state in 
 zookeeper.
 The code that handles replication does most of that already, see 
 ReplicationSourceManager and ReplicationSource. Basically when 
 ReplicationSourceManager.init() is called, it will check all the queues in ZK 
 and try to grab those that aren't attached to a region server. If the whole 
 cluster is down, it will grab all of them.
 The beautiful thing here is that you could start that tool on all your 
 machines and the load will be spread out, but that might not be a big concern 
 if replication wasn't lagging since it would take a few seconds to finish 
 replicating the missing data for each region server.
 I'm guessing when starting ReplicationSourceManager you'd give it a fake 
 region 

[jira] [Updated] (HBASE-9907) Rig to fake a cluster so can profile client behaviors

2013-11-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-9907:
-

Attachment: 9907v3.txt

Retry

 Rig to fake a cluster so can profile client behaviors
 -

 Key: HBASE-9907
 URL: https://issues.apache.org/jira/browse/HBASE-9907
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Affects Versions: 0.96.0
Reporter: stack
Assignee: stack
 Fix For: 0.98.0, 0.96.1

 Attachments: 9907.txt, 9907.txt, 9907v2.txt, 9907v3.txt


 Patch carried over from the HBASE-9775 parent issue.  Adds to 
 TestClientNoCluster#main a rig that allows faking many clients against a few 
 servers, and the opposite.  Useful for studying client operation.
 Includes a few changes to pb makings to try to save on a few creations.
 Also has an edit of the javadoc on how to create an HConnection and HTable, 
 trying to be more forceful about pointing you in the right direction 
 ([~lhofhansl] -- mind reviewing these javadoc changes?)
 I have a +1 on this patch already up in the parent issue.  Will run it by 
 hadoopqa to make sure all is good before commit.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9808) org.apache.hadoop.hbase.rest.PerformanceEvaluation is out of sync with org.apache.hadoop.hbase.PerformanceEvaluation

2013-11-07 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13816404#comment-13816404
 ] 

Ted Yu commented on HBASE-9808:
---

You can find out where the whitespace was introduced by using 
https://reviews.apache.org:
Create a new review request. Select 'hbase' for 'Repository:'.
In 'Base Directory:', enter '/' (without quotes).
Select patch v3.

You will then see new whitespace on line 296.

 org.apache.hadoop.hbase.rest.PerformanceEvaluation is out of sync with 
 org.apache.hadoop.hbase.PerformanceEvaluation
 

 Key: HBASE-9808
 URL: https://issues.apache.org/jira/browse/HBASE-9808
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Gustavo Anatoly
 Attachments: HBASE-9808-v1.patch, HBASE-9808-v2.patch, 
 HBASE-9808-v3.patch, HBASE-9808.patch


 Here is list of JIRAs whose fixes might have gone into 
 rest.PerformanceEvaluation :
 {code}
 
 r1527817 | mbertozzi | 2013-09-30 15:57:44 -0700 (Mon, 30 Sep 2013) | 1 line
 HBASE-9663 PerformanceEvaluation does not properly honor specified table name 
 parameter
 
 r1526452 | mbertozzi | 2013-09-26 04:58:50 -0700 (Thu, 26 Sep 2013) | 1 line
 HBASE-9662 PerformanceEvaluation input do not handle tags properties
 
 r1525269 | ramkrishna | 2013-09-21 11:01:32 -0700 (Sat, 21 Sep 2013) | 3 lines
 HBASE-8496 - Implement tags and the internals of how a tag should look like 
 (Ram)
 
 r1524985 | nkeywal | 2013-09-20 06:02:54 -0700 (Fri, 20 Sep 2013) | 1 line
 HBASE-9558  PerformanceEvaluation is in hbase-server, and creates a 
 dependency to MiniDFSCluster
 
 r1523782 | nkeywal | 2013-09-16 13:07:13 -0700 (Mon, 16 Sep 2013) | 1 line
 HBASE-9521  clean clearBufferOnFail behavior and deprecate it
 
 r1518341 | jdcryans | 2013-08-28 12:46:55 -0700 (Wed, 28 Aug 2013) | 2 lines
 HBASE-9330 Refactor PE to create HTable the correct way
 {code}
 Long term, we may consider consolidating the two PerformanceEvaluation 
 classes so that such maintenance work can be reduced.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9856) Fix some findbugs Performance Warnings

2013-11-07 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9856:
--

Status: Open  (was: Patch Available)

 Fix some findbugs Performance Warnings
 --

 Key: HBASE-9856
 URL: https://issues.apache.org/jira/browse/HBASE-9856
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Trivial
 Fix For: 0.98.0

 Attachments: 9856-v1.txt


 These are the warnings to be fixed:
 {code}
 SIC Should org.apache.hadoop.hbase.regionserver.HRegion$RowLock be a _static_ 
 inner class?
 UPM Private method 
 org.apache.hadoop.hbase.security.access.AccessController.requirePermission(String,
  String, Permission$Action[]) is never called
 WMI Method 
 org.apache.hadoop.hbase.regionserver.wal.WALEditsReplaySink.replayEntries(List)
  makes inefficient use of keySet iterator instead of entrySet iterator
 {code}
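The WMI warning above refers to a well-known pattern; a generic sketch (not the actual WALEditsReplaySink code) shows why entrySet iteration is preferred — iterating keySet pays an extra map lookup per key:

```java
import java.util.HashMap;
import java.util.Map;

class EntrySetDemo {
    // Flagged pattern: one extra get() lookup on every iteration.
    static long sumViaKeySet(Map<String, Long> m) {
        long total = 0;
        for (String k : m.keySet()) {
            total += m.get(k);          // redundant lookup per key
        }
        return total;
    }

    // Preferred: entrySet yields key and value together in one pass.
    static long sumViaEntrySet(Map<String, Long> m) {
        long total = 0;
        for (Map.Entry<String, Long> e : m.entrySet()) {
            total += e.getValue();
        }
        return total;
    }
}
```

Both methods return the same result; the entrySet version simply avoids the per-key hash lookup, which is what the findbugs fix amounts to.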



--
This message was sent by Atlassian JIRA
(v6.1#6144)

