[jira] [Commented] (HBASE-10615) Make LoadIncrementalHFiles skip reference files

2014-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912622#comment-13912622
 ] 

Hadoop QA commented on HBASE-10615:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12631153/HBASE-10615-trunk.patch
  against trunk revision .
  ATTACHMENT ID: 12631153

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8810//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8810//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8810//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8810//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8810//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8810//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8810//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8810//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8810//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8810//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8810//console

This message is automatically generated.

 Make LoadIncrementalHFiles skip reference files
 ---

 Key: HBASE-10615
 URL: https://issues.apache.org/jira/browse/HBASE-10615
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.96.0
Reporter: Jerry He
Assignee: Jerry He
Priority: Minor
 Attachments: HBASE-10615-trunk.patch


 There is a use case where the source of hfiles for LoadIncrementalHFiles can be 
 a FileSystem copy-out/backup of an HBase table or archived hfiles.  For example,
 1. Copy-out of hbase.rootdir, table dir, region dir (after disable) or 
 archive dir.
 2. ExportSnapshot
 It is possible that there are reference files in the family dir in these 
 cases.
 We have such use cases where, when trying to load back into HBase, we'll get
 {code}
 Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem 
 reading HFile Trailer from file 
 hdfs://HDFS-AMR/tmp/restoreTemp/117182adfe861c5d2b607da91d60aa8a/info/aed3d01648384b31b29e5bad4cd80bec.d179ab341fc68e7612fcd74eaf7cafbd
 at 
 org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570)
 at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:594)
 at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:636)
 at 
 org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.groupOrSplit(LoadIncrementalHFiles.java:472)
 at 
 org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(LoadIncrementalHFiles.java:393)
 at 
 org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(LoadIncrementalHFiles.java:391)
 {code}
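
To illustrate the proposed improvement, here is a minimal sketch (not the actual patch) of how the bulk-load directory scan could skip reference files and HFile links before trying to open them as plain HFiles. It assumes the StoreFileInfo and HFileLink helpers available in current HBase code; the class and method names are illustrative only.

{code}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.io.HFileLink;
import org.apache.hadoop.hbase.regionserver.StoreFileInfo;

public class BulkLoadSkipCheck {
  // Illustrative helper: true if a file found in a family dir should be skipped
  // instead of being opened as a regular HFile.
  public static boolean shouldSkip(Path file) {
    String name = file.getName();
    if (name.startsWith("_")) {
      return true;                        // e.g. _SUCCESS or _logs left by a copy job
    }
    if (StoreFileInfo.isReference(file)) {
      return true;                        // half-file reference left by a region split
    }
    if (HFileLink.isHFileLink(file)) {
      return true;                        // link created by snapshots / ExportSnapshot
    }
    return false;
  }
}
{code}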
 

[jira] [Commented] (HBASE-9914) Port fix for HBASE-9836 'Intermittent TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking failure' to 0.94

2014-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912629#comment-13912629
 ] 

Hadoop QA commented on HBASE-9914:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12631154/HBASE-9914-0.94.17-v01.patch
  against trunk revision .
  ATTACHMENT ID: 12631154

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8811//console

This message is automatically generated.

 Port fix for HBASE-9836 'Intermittent 
 TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking 
 failure' to 0.94
 -

 Key: HBASE-9914
 URL: https://issues.apache.org/jira/browse/HBASE-9914
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: takeshi.miao
  Labels: noob
 Attachments: HBASE-9914-0.94.17-v01.patch


 According to this thread: http://search-hadoop.com/m/3CzC31BQsDd , 
 TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking 
 sometimes failed.
 This issue is to port the fix from HBASE-9836 to 0.94



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10614) Master could not be stopped

2014-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912633#comment-13912633
 ] 

Hudson commented on HBASE-10614:


SUCCESS: Integrated in HBase-0.94-JDK7 #64 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/64/])
HBASE-10614 Master could not be stopped (Jingcheng Du) (stack: rev 1571918)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/catalog/MetaReader.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java


 Master could not be stopped
 ---

 Key: HBASE-10614
 URL: https://issues.apache.org/jira/browse/HBASE-10614
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.16, 0.99.0
Reporter: Jingcheng Du
Assignee: Jingcheng Du
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-10614-0.94.patch, HBASE-10614.patch


  It's an issue when running bin/hbase master stop to shut down the cluster.
  This can be reproduced by the following steps. In particular, for the trunk 
 code, we need to configure hbase.assignment.maximum.attempts as 1 (see the 
 configuration sketch after this description).
 1. Start one master and several region servers.
 2. Stop all the region servers.
 3. After a while, run bin/hbase master stop to shut down the cluster.
  As a result, the master could not be stopped within a short time, but was only 
 stopped after several hours. And after it stopped, I found these error logs.
 1. For the trunk:
   A. Lots of logs like java.io.IOException: Failed to find 
 location, tableName=hbase:meta, row=, reload=true
   B. And at last, there's one exception before the master is stopped, 
 ServerShutdownHandler: Received exception accessing hbase:meta during server 
 shutdown of server-XXX, retrying hbase:meta read
 java.io.InterruptedIOException: Interrupted after 0 tries  on 350.
 2. For the branch 0.94: 
   A. Lots of logs like Looked up root region location, 
 connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@44285d14;
  serverName=.
   B. And at last, there's one exception before the master is stopped, 
 ServerShutdownHandler: Received exception accessing META during server 
 shutdown of server-XXX, retrying META read
 org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find 
 region for  after 140 tries.
  We can see that the master is stopped only after lots of retries, which are 
 not necessary when the cluster is being shut down.
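
As mentioned in the description, reproducing this on trunk needs hbase.assignment.maximum.attempts set to 1. A minimal sketch of applying that setting follows; the property name comes from the description, everything else is illustrative.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ShutdownRepro {
  // Illustrative: build the configuration used to reproduce the shutdown hang on trunk.
  public static Configuration reproConf() {
    Configuration conf = HBaseConfiguration.create();
    // Limit assignment retries so the problem shows up quickly (see description).
    conf.setInt("hbase.assignment.maximum.attempts", 1);
    return conf;
  }
}
{code}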



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10614) Master could not be stopped

2014-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912657#comment-13912657
 ] 

Hudson commented on HBASE-10614:


ABORTED: Integrated in HBase-0.94-on-Hadoop-2 #32 (See 
[https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/32/])
HBASE-10614 Master could not be stopped (Jingcheng Du) (stack: rev 1571918)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/catalog/MetaReader.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java


 Master could not be stopped
 ---

 Key: HBASE-10614
 URL: https://issues.apache.org/jira/browse/HBASE-10614
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.16, 0.99.0
Reporter: Jingcheng Du
Assignee: Jingcheng Du
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-10614-0.94.patch, HBASE-10614.patch


  It's an issue when running bin/hbase master stop to shut down the cluster.
  This can be reproduced by the following steps. In particular, for the trunk 
 code, we need to configure hbase.assignment.maximum.attempts as 1.
 1. Start one master and several region servers.
 2. Stop all the region servers.
 3. After a while, run bin/hbase master stop to shut down the cluster.
  As a result, the master could not be stopped within a short time, but was only 
 stopped after several hours. And after it stopped, I found these error logs.
 1. For the trunk:
   A. Lots of logs like java.io.IOException: Failed to find 
 location, tableName=hbase:meta, row=, reload=true
   B. And at last, there's one exception before the master is stopped, 
 ServerShutdownHandler: Received exception accessing hbase:meta during server 
 shutdown of server-XXX, retrying hbase:meta read
 java.io.InterruptedIOException: Interrupted after 0 tries  on 350.
 2. For the branch 0.94: 
   A. Lots of logs like Looked up root region location, 
 connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@44285d14;
  serverName=.
   B. And at last, there's one exception before the master is stopped, 
 ServerShutdownHandler: Received exception accessing META during server 
 shutdown of server-XXX, retrying META read
 org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find 
 region for  after 140 tries.
  We can see that the master is stopped only after lots of retries, which are 
 not necessary when the cluster is being shut down.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10077) Per family WAL encryption

2014-02-26 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912679#comment-13912679
 ] 

Anoop Sam John commented on HBASE-10077:


So one Mutation write will involve writes to 2 WALs.  Atomicity for the 2 WAL 
writes (?)

 Per family WAL encryption
 -

 Key: HBASE-10077
 URL: https://issues.apache.org/jira/browse/HBASE-10077
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.98.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell

 HBASE-7544 introduces WAL encryption to prevent the leakage of protected data 
 to disk by way of WAL files. However it is currently enabled globally for the 
 regionserver. Encryption of WAL entries should depend on whether or not an 
 entry in the WAL is to be stored within an encrypted column family.
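
For context, a hedged sketch of how a column family is marked for encryption (per HBASE-7544); whether WAL entries for such a family get encrypted is exactly what this issue proposes to make conditional. The table and family names are illustrative.

{code}
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;

public class EncryptedFamilyExample {
  // Illustrative: a table with one family whose data files are encrypted.
  public static HTableDescriptor secureTable() {
    HTableDescriptor table = new HTableDescriptor(TableName.valueOf("secure_table"));
    HColumnDescriptor family = new HColumnDescriptor("f1");
    family.setEncryptionType("AES");   // HBASE-7544 transparent encryption for this family
    table.addFamily(family);
    return table;
  }
}
{code}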



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10614) Master could not be stopped

2014-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912701#comment-13912701
 ] 

Hudson commented on HBASE-10614:


FAILURE: Integrated in HBase-0.94 #1300 (See 
[https://builds.apache.org/job/HBase-0.94/1300/])
HBASE-10614 Master could not be stopped (Jingcheng Du) (stack: rev 1571918)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/catalog/MetaReader.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java


 Master could not be stopped
 ---

 Key: HBASE-10614
 URL: https://issues.apache.org/jira/browse/HBASE-10614
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.16, 0.99.0
Reporter: Jingcheng Du
Assignee: Jingcheng Du
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-10614-0.94.patch, HBASE-10614.patch


  It's an issue when running bin/hbase master stop to shut down the cluster.
  This can be reproduced by the following steps. In particular, for the trunk 
 code, we need to configure hbase.assignment.maximum.attempts as 1.
 1. Start one master and several region servers.
 2. Stop all the region servers.
 3. After a while, run bin/hbase master stop to shut down the cluster.
  As a result, the master could not be stopped within a short time, but was only 
 stopped after several hours. And after it stopped, I found these error logs.
 1. For the trunk:
   A. Lots of logs like java.io.IOException: Failed to find 
 location, tableName=hbase:meta, row=, reload=true
   B. And at last, there's one exception before the master is stopped, 
 ServerShutdownHandler: Received exception accessing hbase:meta during server 
 shutdown of server-XXX, retrying hbase:meta read
 java.io.InterruptedIOException: Interrupted after 0 tries  on 350.
 2. For the branch 0.94: 
   A. Lots of logs like Looked up root region location, 
 connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@44285d14;
  serverName=.
   B. And at last, there's one exception before the master is stopped, 
 ServerShutdownHandler: Received exception accessing META during server 
 shutdown of server-XXX, retrying META read
 org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find 
 region for  after 140 tries.
  We can see that the master is stopped only after lots of retries, which are 
 not necessary when the cluster is being shut down.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10617) Value lost if $ element is before column element in json when posted to Rest Server

2014-02-26 Thread Liu Shaohui (JIRA)
Liu Shaohui created HBASE-10617:
---

 Summary: Value lost if $ element is before column element in 
json when posted to Rest Server
 Key: HBASE-10617
 URL: https://issues.apache.org/jira/browse/HBASE-10617
 Project: HBase
  Issue Type: Bug
  Components: REST
Affects Versions: 0.94.11
Reporter: Liu Shaohui
Priority: Minor


When posting the following json data to the rest server, it returns 200, but the value is 
null in HBase
{code}
{"Row": { "key":"cjI=", "Cell": {"$":"ZGF0YTE=", "column":"ZjE6YzI="}}}
{code}

From the rest server log, we found the length of the value is 0 after the server 
parses the json into the RowModel object
{code}
14/02/26 17:52:14 DEBUG rest.RowResource: PUT 
{"totalColumns":1,"families":{"f1":[{"timestamp":9223372036854775807,"qualifier":"c2","vlen":0}]},"row":"r2"}
{code}

When the order is column before $, it works fine.
{code}
{"Row": { "key":"cjI=", "Cell": {"column":"ZjE6YzI=", "$":"ZGF0YTE=" }}}
{code}

Different json libs may order these two elements differently even if 
column is put before $.
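
Until this is fixed, a minimal client-side sketch of the workaround implied above is to always emit column before $ in the payload. The row, column and value strings below are the ones from this report, the payload shape follows the report, and the use of Java 8's Base64 is just for brevity.

{code}
import java.util.Base64;

public class RestPutPayload {
  // Illustrative: build the PUT body with "column" ahead of "$" so the value is kept
  // (see the ordering problem described above).
  public static void main(String[] args) {
    String row = b64("r2");
    String column = b64("f1:c1");
    String value = b64("data1");
    String json = "{\"Row\":{\"key\":\"" + row + "\", \"Cell\": {\"column\":\""
        + column + "\", \"$\":\"" + value + "\"}}}";
    System.out.println(json);   // send with Content-Type: application/json to /t1/r2/f1:c1
  }

  private static String b64(String s) {
    return Base64.getEncoder().encodeToString(s.getBytes());
  }
}
{code}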





--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (HBASE-2166) Inconsistent/wrong treatment of timestamp parameters in the old Thrift API

2014-02-26 Thread Lars Francke (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Francke reassigned HBASE-2166:
---

Assignee: (was: Lars Francke)

 Inconsistent/wrong treatment of timestamp parameters in the old Thrift API
 --

 Key: HBASE-2166
 URL: https://issues.apache.org/jira/browse/HBASE-2166
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Affects Versions: 0.20.2
Reporter: Lars Francke

 The old Thrift API treats timestamps wrongly and inconsistently.
 getRowTs should return a specific version or null if that doesn't exist. 
 Currently it does seem to treat the timestamp as exclusive so the row is not 
 found. Same goes for getVerTs but there it might only be a documentation 
 problem.
 I'll go through the old API and will try to find any such problems and 
 correct them in a way that is consistent with the Java API where possible.
 Peter Falk reported this on the mailing list. Thanks!



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10614) Master could not be stopped

2014-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912756#comment-13912756
 ] 

Hudson commented on HBASE-10614:


FAILURE: Integrated in hbase-0.96 #316 (See 
[https://builds.apache.org/job/hbase-0.96/316/])
HBASE-10614 Master could not be stopped (Jingcheng Du) (stack: rev 1571917)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/catalog/MetaReader.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java


 Master could not be stopped
 ---

 Key: HBASE-10614
 URL: https://issues.apache.org/jira/browse/HBASE-10614
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.16, 0.99.0
Reporter: Jingcheng Du
Assignee: Jingcheng Du
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-10614-0.94.patch, HBASE-10614.patch


  It's an issue when running bin/hbase master stop to shut down the cluster.
  This can be reproduced by the following steps. In particular, for the trunk 
 code, we need to configure hbase.assignment.maximum.attempts as 1.
 1. Start one master and several region servers.
 2. Stop all the region servers.
 3. After a while, run bin/hbase master stop to shut down the cluster.
  As a result, the master could not be stopped within a short time, but was only 
 stopped after several hours. And after it stopped, I found these error logs.
 1. For the trunk:
   A. Lots of logs like java.io.IOException: Failed to find 
 location, tableName=hbase:meta, row=, reload=true
   B. And at last, there's one exception before the master is stopped, 
 ServerShutdownHandler: Received exception accessing hbase:meta during server 
 shutdown of server-XXX, retrying hbase:meta read
 java.io.InterruptedIOException: Interrupted after 0 tries  on 350.
 2. For the branch 0.94: 
   A. Lots of logs like Looked up root region location, 
 connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@44285d14;
  serverName=.
   B. And at last, there's one exception before the master is stopped, 
 ServerShutdownHandler: Received exception accessing META during server 
 shutdown of server-XXX, retrying META read
 org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find 
 region for  after 140 tries.
  We can see that the master is stopped only after lots of retries, which are 
 not necessary when the cluster is being shut down.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10617) Value lost if $ element is before column element in json when posted to Rest Server

2014-02-26 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912802#comment-13912802
 ] 

Jean-Marc Spaggiari commented on HBASE-10617:
-

Hi Liu,

Going to send a patch? Or looking for someone to look at it?

 Value lost if $ element is before column element in json when posted to 
 Rest Server
 ---

 Key: HBASE-10617
 URL: https://issues.apache.org/jira/browse/HBASE-10617
 Project: HBase
  Issue Type: Bug
  Components: REST
Affects Versions: 0.94.11
Reporter: Liu Shaohui
Priority: Minor

 When posting the following json data to the rest server, it returns 200, but the value is 
 null in HBase
 {code}
 {"Row": { "key":"cjI=", "Cell": {"$":"ZGF0YTE=", "column":"ZjE6YzI="}}}
 {code}
 From the rest server log, we found the length of the value is 0 after the server 
 parses the json into the RowModel object
 {code}
 14/02/26 17:52:14 DEBUG rest.RowResource: PUT 
 {"totalColumns":1,"families":{"f1":[{"timestamp":9223372036854775807,"qualifier":"c2","vlen":0}]},"row":"r2"}
 {code}
 When the order is column before $, it works fine.
 {code}
 {"Row": { "key":"cjI=", "Cell": {"column":"ZjE6YzI=", "$":"ZGF0YTE=" }}}
 {code}
 Different json libs may order these two elements differently even if 
 column is put before $.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-8304) Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port.

2014-02-26 Thread haosdent (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

haosdent updated HBASE-8304:


Attachment: (was: HBASE-8304.patch)

 Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured 
 without default port.
 ---

 Key: HBASE-8304
 URL: https://issues.apache.org/jira/browse/HBASE-8304
 Project: HBase
  Issue Type: Bug
  Components: HFile, regionserver
Affects Versions: 0.94.5
Reporter: Raymond Liu
  Labels: bulkloader

 When fs.default.name or fs.defaultFS in the hadoop core-site.xml is configured as 
 hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir 
 where port is the hdfs namenode's default port, the bulkload operation will 
 not remove the files in the bulk output dir. Store::bulkLoadHfile will treat 
 hdfs://ip and hdfs://ip:port as different filesystems and go with the copy 
 approach instead of a rename.
 The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS 
 according to hbase.rootdir when the regionserver starts; thus, the dest fs uri from 
 the hregion will not match the src fs uri passed from the client.
 Any suggestion on what is the best approach to fix this issue? 
 I kind of think that we could check for the default port if the src uri comes without 
 port info.
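
A minimal sketch of the mismatch described above, with illustrative IP/port values; it only shows why the two URIs end up being treated as different filesystems.

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class FsUriMismatch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Client side: default FS configured without a port.
    FileSystem srcFs = FileSystem.get(URI.create("hdfs://10.0.0.1"), conf);
    // Region server side: hbase.rootdir carries the namenode's default port.
    FileSystem dstFs = FileSystem.get(URI.create("hdfs://10.0.0.1:8020/hbase"), conf);
    // The URIs differ (port vs no port), so the store treats them as different
    // filesystems and falls back to copying instead of renaming the bulk files.
    System.out.println(srcFs.getUri().equals(dstFs.getUri()));   // prints false
  }
}
{code}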



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10617) Value lost if $ element is before column element in json when posted to Rest Server

2014-02-26 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912807#comment-13912807
 ] 

Liu Shaohui commented on HBASE-10617:
-

[~jmspaggi]
After looking at the HBase rest code, I think it is JAXBContextResolver 
that unmarshals the json into the RowModel.
But I don't know how to fix it. Would someone who is familiar with this code 
take a look at this issue?
Thx.

 Value lost if $ element is before column element in json when posted to 
 Rest Server
 ---

 Key: HBASE-10617
 URL: https://issues.apache.org/jira/browse/HBASE-10617
 Project: HBase
  Issue Type: Bug
  Components: REST
Affects Versions: 0.94.11
Reporter: Liu Shaohui
Priority: Minor

 When posting the following json data to the rest server, it returns 200, but the value is 
 null in HBase
 {code}
 {"Row": { "key":"cjI=", "Cell": {"$":"ZGF0YTE=", "column":"ZjE6YzI="}}}
 {code}
 From the rest server log, we found the length of the value is 0 after the server 
 parses the json into the RowModel object
 {code}
 14/02/26 17:52:14 DEBUG rest.RowResource: PUT 
 {"totalColumns":1,"families":{"f1":[{"timestamp":9223372036854775807,"qualifier":"c2","vlen":0}]},"row":"r2"}
 {code}
 When the order is column before $, it works fine.
 {code}
 {"Row": { "key":"cjI=", "Cell": {"column":"ZjE6YzI=", "$":"ZGF0YTE=" }}}
 {code}
 Different json libs may order these two elements differently even if 
 column is put before $.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-8304) Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port.

2014-02-26 Thread haosdent (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

haosdent updated HBASE-8304:


Attachment: HBASE-8304.patch

Fix error while compiling with hadoop 1.0

 Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured 
 without default port.
 ---

 Key: HBASE-8304
 URL: https://issues.apache.org/jira/browse/HBASE-8304
 Project: HBase
  Issue Type: Bug
  Components: HFile, regionserver
Affects Versions: 0.94.5
Reporter: Raymond Liu
  Labels: bulkloader
 Attachments: HBASE-8304.patch


 When fs.default.name or fs.defaultFS in the hadoop core-site.xml is configured as 
 hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir 
 where port is the hdfs namenode's default port, the bulkload operation will 
 not remove the files in the bulk output dir. Store::bulkLoadHfile will treat 
 hdfs://ip and hdfs://ip:port as different filesystems and go with the copy 
 approach instead of a rename.
 The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS 
 according to hbase.rootdir when the regionserver starts; thus, the dest fs uri from 
 the hregion will not match the src fs uri passed from the client.
 Any suggestion on what is the best approach to fix this issue? 
 I kind of think that we could check for the default port if the src uri comes without 
 port info.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10617) Value lost if $ element is before column element in json when posted to Rest Server

2014-02-26 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912814#comment-13912814
 ] 

Jean-Marc Spaggiari commented on HBASE-10617:
-

Is this:
{code}
{"Row": { "key":"cjI=", "Cell": {"$":"ZGF0YTE=", "column":"ZjE6YzI="}}}
{code}

Related to this?
{code}
14/02/26 17:52:14 DEBUG rest.RowResource: PUT 
{"totalColumns":1,"families":{"f1":[{"timestamp":9223372036854775807,"qualifier":"c2","vlen":0}]},"row":"r2"}
{code}

Or is it from 2 different calls? Because the key seems to be different between the 
2. So is the qualifier.

Do you have the exact URL you send to the rest server from your client side? Can 
you capture it and paste it here?

 Value lost if $ element is before column element in json when posted to 
 Rest Server
 ---

 Key: HBASE-10617
 URL: https://issues.apache.org/jira/browse/HBASE-10617
 Project: HBase
  Issue Type: Bug
  Components: REST
Affects Versions: 0.94.11
Reporter: Liu Shaohui
Priority: Minor

 When posting the following json data to the rest server, it returns 200, but the value is 
 null in HBase
 {code}
 {"Row": { "key":"cjI=", "Cell": {"$":"ZGF0YTE=", "column":"ZjE6YzI="}}}
 {code}
 From the rest server log, we found the length of the value is 0 after the server 
 parses the json into the RowModel object
 {code}
 14/02/26 17:52:14 DEBUG rest.RowResource: PUT 
 {"totalColumns":1,"families":{"f1":[{"timestamp":9223372036854775807,"qualifier":"c2","vlen":0}]},"row":"r2"}
 {code}
 When the order is column before $, it works fine.
 {code}
 {"Row": { "key":"cjI=", "Cell": {"column":"ZjE6YzI=", "$":"ZGF0YTE=" }}}
 {code}
 Different json libs may order these two elements differently even if 
 column is put before $.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10617) Value lost if $ element is before column element in json when posted to Rest Server

2014-02-26 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912819#comment-13912819
 ] 

Liu Shaohui commented on HBASE-10617:
-

[~jmspaggi]
The json data is related to the log. cjI= is the base64 encoding of r2, and the 
same for ZjE6YzI= - f1:c1, ZGF0YTE= - data1

The test cmd is:
{code}
 curl -v -X PUT http://rest-server-ip:port/rest_test/r2/f1:c1 -H "Content-Type: 
application/json" --data '{"Row":{"key":"cjI=", "Cell": {"$":"ZGF0YTE=", 
"column":"ZjE6YzI="}}}'
{code}


 Value lost if $ element is before column element in json when posted to 
 Rest Server
 ---

 Key: HBASE-10617
 URL: https://issues.apache.org/jira/browse/HBASE-10617
 Project: HBase
  Issue Type: Bug
  Components: REST
Affects Versions: 0.94.11
Reporter: Liu Shaohui
Priority: Minor

 When posting the following json data to the rest server, it returns 200, but the value is 
 null in HBase
 {code}
 {"Row": { "key":"cjI=", "Cell": {"$":"ZGF0YTE=", "column":"ZjE6YzI="}}}
 {code}
 From the rest server log, we found the length of the value is 0 after the server 
 parses the json into the RowModel object
 {code}
 14/02/26 17:52:14 DEBUG rest.RowResource: PUT 
 {"totalColumns":1,"families":{"f1":[{"timestamp":9223372036854775807,"qualifier":"c2","vlen":0}]},"row":"r2"}
 {code}
 When the order is column before $, it works fine.
 {code}
 {"Row": { "key":"cjI=", "Cell": {"column":"ZjE6YzI=", "$":"ZGF0YTE=" }}}
 {code}
 Different json libs may order these two elements differently even if 
 column is put before $.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10617) Value lost if $ element is before column element in json when posted to Rest Server

2014-02-26 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912822#comment-13912822
 ] 

Liu Shaohui commented on HBASE-10617:
-

[~jmspaggi]
A typo in the URL above; the test cmd should be:
{code}
 curl -v -X PUT http://rest-server-ip:port/t1/r2/f1:c1 -H "Content-Type: 
application/json" --data '{"Row":{"key":"cjI=", "Cell": {"$":"ZGF0YTE=", 
"column":"ZjE6YzI="}}}'
{code}

 Value lost if $ element is before column element in json when posted to 
 Rest Server
 ---

 Key: HBASE-10617
 URL: https://issues.apache.org/jira/browse/HBASE-10617
 Project: HBase
  Issue Type: Bug
  Components: REST
Affects Versions: 0.94.11
Reporter: Liu Shaohui
Priority: Minor

 When posting the following json data to the rest server, it returns 200, but the value is 
 null in HBase
 {code}
 {"Row": { "key":"cjI=", "Cell": {"$":"ZGF0YTE=", "column":"ZjE6YzI="}}}
 {code}
 From the rest server log, we found the length of the value is 0 after the server 
 parses the json into the RowModel object
 {code}
 14/02/26 17:52:14 DEBUG rest.RowResource: PUT 
 {"totalColumns":1,"families":{"f1":[{"timestamp":9223372036854775807,"qualifier":"c2","vlen":0}]},"row":"r2"}
 {code}
 When the order is column before $, it works fine.
 {code}
 {"Row": { "key":"cjI=", "Cell": {"column":"ZjE6YzI=", "$":"ZGF0YTE=" }}}
 {code}
 Different json libs may order these two elements differently even if 
 column is put before $.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10566) cleanup rpcTimeout in the client

2014-02-26 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912826#comment-13912826
 ] 

Nicolas Liochon commented on HBASE-10566:
-

bq. Will do as an addendum. 
Done.

 cleanup rpcTimeout in the client
 

 Key: HBASE-10566
 URL: https://issues.apache.org/jira/browse/HBASE-10566
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10566.sample.patch, 10566.v1.patch, 10566.v2.patch, 
 10566.v3.patch


 There are two issues:
 1) A confusion between the socket timeout and the call timeout.
 Socket timeouts should be minimal: a default like 20 seconds, which could be 
 lowered to single-digit timeouts for some apps: if we cannot write to the 
 socket in 10 seconds, we have an issue. This is different from the total 
 duration (send query + do query + receive query), which can be longer, as it 
 can include remote calls on the server and so on. Today we have a single 
 value, so it does not allow us to have low socket read timeouts.
 2) The timeout can be different between calls. Typically, if the total 
 time, retries included, is 60 seconds but a call failed after 2 seconds, then the 
 remaining is 58s. HBase does this today, but by hacking with a thread-local 
 storage variable. It's a hack (it should have been a parameter of the 
 methods; the TLS allowed bypassing all the layers. Maybe protobuf makes this 
 complicated, to be confirmed), but it also does not really work, because 
 we can have multithreading issues (we use the updated rpc timeout of someone 
 else, or we create a new BlockingRpcChannelImplementation with a random 
 default timeout).
 Ideally, we could send the call timeout to the server as well: it would then be 
 able to dismiss on its own the calls that it received but that got stuck in the 
 request queue or in the internal retries (on hdfs for example).
 This will make the system more reactive to failure.
 I think we can solve this now, especially after 10525. The main issue is to find 
 something that fits well with protobuf...
 Then it should be easy to have a pool of threads for writers and readers, instead 
 of a single thread per region server as today. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10566) cleanup rpcTimeout in the client

2014-02-26 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10566:


Release Note: 
3 new settings are now available to configure the socket in the HBase client:
- connect timeout: hbase.ipc.client.socket.timeout.connect (milliseconds, 
default: 10 seconds)
- read timeout: hbase.ipc.client.socket.timeout.read (milliseconds, default: 
20 seconds)
- write timeout: hbase.ipc.client.socket.timeout.write (milliseconds, default: 
60 seconds)

ipc.socket.timeout is not used anymore.
The per operation timeout is still controlled by hbase.rpc.timeout 


  was:
3 settings are now available to configure the socket in the HBase client:
- connect timeout: ipc.socket.timeout.connect (default: 10 seconds)
- read timeout: ipc.socket.timeout.read (default: 20 seconds)
- write timeout: ipc.socket.timeout.write (default: 60 seconds)

The per operation timeout is still controlled by hbase.rpc.timeout 
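
A hedged sketch of applying the three new client settings from the release note above; the millisecond values are just the documented defaults, and the class name is illustrative.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ClientSocketTimeouts {
  // Illustrative: set the new socket timeouts plus the per-operation timeout.
  public static Configuration configure() {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.ipc.client.socket.timeout.connect", 10000); // connect timeout (ms)
    conf.setInt("hbase.ipc.client.socket.timeout.read", 20000);    // read timeout (ms)
    conf.setInt("hbase.ipc.client.socket.timeout.write", 60000);   // write timeout (ms)
    conf.setInt("hbase.rpc.timeout", 60000);                       // per-operation timeout, unchanged
    return conf;
  }
}
{code}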



 cleanup rpcTimeout in the client
 

 Key: HBASE-10566
 URL: https://issues.apache.org/jira/browse/HBASE-10566
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10566.sample.patch, 10566.v1.patch, 10566.v2.patch, 
 10566.v3.patch


 There are two issues:
 1) A confusion between the socket timeout and the call timeout.
 Socket timeouts should be minimal: a default like 20 seconds, which could be 
 lowered to single-digit timeouts for some apps: if we cannot write to the 
 socket in 10 seconds, we have an issue. This is different from the total 
 duration (send query + do query + receive query), which can be longer, as it 
 can include remote calls on the server and so on. Today we have a single 
 value, so it does not allow us to have low socket read timeouts.
 2) The timeout can be different between calls. Typically, if the total 
 time, retries included, is 60 seconds but a call failed after 2 seconds, then the 
 remaining is 58s. HBase does this today, but by hacking with a thread-local 
 storage variable. It's a hack (it should have been a parameter of the 
 methods; the TLS allowed bypassing all the layers. Maybe protobuf makes this 
 complicated, to be confirmed), but it also does not really work, because 
 we can have multithreading issues (we use the updated rpc timeout of someone 
 else, or we create a new BlockingRpcChannelImplementation with a random 
 default timeout).
 Ideally, we could send the call timeout to the server as well: it would then be 
 able to dismiss on its own the calls that it received but that got stuck in the 
 request queue or in the internal retries (on hdfs for example).
 This will make the system more reactive to failure.
 I think we can solve this now, especially after 10525. The main issue is to find 
 something that fits well with protobuf...
 Then it should be easy to have a pool of threads for writers and readers, instead 
 of a single thread per region server as today. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10436) restore regionserver lists removed from hbase 0.96+ jmx

2014-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912829#comment-13912829
 ] 

Hudson commented on HBASE-10436:


SUCCESS: Integrated in hbase-0.96-hadoop2 #217 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/217/])
HBASE-10436 restore regionserver lists removed from hbase 0.96.0 jmx (jmhsieh: 
rev 1571888)
* 
/hbase/branches/0.96/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsMasterSource.java
* 
/hbase/branches/0.96/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsMasterWrapper.java
* 
/hbase/branches/0.96/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsMasterSourceImpl.java
* 
/hbase/branches/0.96/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsMasterSourceImpl.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetricsMasterWrapperImpl.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterMetricsWrapper.java


 restore regionserver lists removed from hbase 0.96+ jmx
 ---

 Key: HBASE-10436
 URL: https://issues.apache.org/jira/browse/HBASE-10436
 Project: HBase
  Issue Type: Bug
  Components: metrics
Affects Versions: 0.98.0, 0.96.0, 0.99.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Critical
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: hbase-10436.notbean.patch, hbase-10436.patch, 
 hbase-10436.v2.patch


 HBase 0.96's refactored jmx beans do not contain the master's list of dead 
 region servers and live regionservers with load info.  HBase 0.94 did (though 
 in a single monolithic blob).  
 This JMX interface should be considered as much of an API as the normal 
 wire or java api.  Dropping values from this was done without deprecation and 
 the removal of this information is a functional regression.
 We should provide the information in the 0.96+ JMX.  HBase 0.94 had a 
 monolithic JMX blob (hadoop:service=Master,name=Master) that contained a 
 lot of information, including the regionserver list and the cached 
 regionserver load for each regionserver, as found on the master webpage.  0.96+ 
 refactored this jmx into several jmx beans which can be selectively 
 retrieved.  These include:
 * hadoop:service=HBase,name=Master,sub=AssignmentManager
 * hadoop:service=HBase,name=Master,sub=Balancer
 * hadoop:service=HBase,name=Master,sub=Server
 * hadoop:service=HBase,name=Master,sub=FileSystem
 Specifically, the (Hadoop:service=HBase,name=Master,sub=Server) listing that 
 used to contain regionservers and deadregionservers in jmx was replaced 
 with numRegionServers and numDeadRegionservers, which only contain counts.  
 I propose just adding another mbean called RegionServers under the bean: 
 hadoop:service=HBase,name=Master,sub=RegionServers
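
For reference, a hedged sketch of reading the refactored Master,sub=Server bean listed above over JMX; the host, port and use of the standard JMX RMI connector are assumptions, and the attribute name is the count mentioned in the description.

{code}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class MasterJmxExample {
  public static void main(String[] args) throws Exception {
    // Illustrative JMX endpoint of the HBase master.
    JMXServiceURL url =
        new JMXServiceURL("service:jmx:rmi:///jndi/rmi://master-host:10101/jmxrmi");
    JMXConnector jmxc = JMXConnectorFactory.connect(url);
    try {
      MBeanServerConnection conn = jmxc.getMBeanServerConnection();
      ObjectName master = new ObjectName("Hadoop:service=HBase,name=Master,sub=Server");
      // 0.96+ only exposes counts here; the proposal above adds a RegionServers bean
      // so the actual server lists become visible again.
      System.out.println(conn.getAttribute(master, "numRegionServers"));
    } finally {
      jmxc.close();
    }
  }
}
{code}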



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10608) Acquire the FS Delegation Token for Secure ExportSnapshot

2014-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912830#comment-13912830
 ] 

Hudson commented on HBASE-10608:


SUCCESS: Integrated in hbase-0.96-hadoop2 #217 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/217/])
HBASE-10608 Acquire the FS Delegation Token for Secure ExportSnapshot 
(mbertozzi: rev 1571893)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/FsDelegationToken.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestSecureExportSnapshot.java


 Acquire the FS Delegation Token for Secure ExportSnapshot
 -

 Key: HBASE-10608
 URL: https://issues.apache.org/jira/browse/HBASE-10608
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.98.0, 0.94.16, 0.96.1.1
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: HBASE-10608-v0.patch


 Export Snapshot is missing the delegation token acquisition for working with 
 remote secure clusters
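
A minimal sketch of the kind of token acquisition the fix adds (not the actual patch): obtain delegation tokens for the remote filesystem and attach them to the job credentials. The renewer name and path are illustrative, and it assumes a Hadoop 2 style FileSystem API.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;

public class AcquireFsTokens {
  // Illustrative: obtain delegation tokens for the remote (secure) output filesystem
  // before submitting the export job, so its tasks can talk to that cluster.
  public static void acquire(Job job, Path remoteOutputDir) throws Exception {
    Configuration conf = job.getConfiguration();
    FileSystem remoteFs = remoteOutputDir.getFileSystem(conf);
    remoteFs.addDelegationTokens("hbase", job.getCredentials());
  }
}
{code}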



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10614) Master could not be stopped

2014-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912831#comment-13912831
 ] 

Hudson commented on HBASE-10614:


SUCCESS: Integrated in hbase-0.96-hadoop2 #217 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/217/])
HBASE-10614 Master could not be stopped (Jingcheng Du) (stack: rev 1571917)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/catalog/MetaReader.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java


 Master could not be stopped
 ---

 Key: HBASE-10614
 URL: https://issues.apache.org/jira/browse/HBASE-10614
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.16, 0.99.0
Reporter: Jingcheng Du
Assignee: Jingcheng Du
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-10614-0.94.patch, HBASE-10614.patch


  It's an issue when running bin/hbase master stop to shut down the cluster.
  This can be reproduced by the following steps. In particular, for the trunk 
 code, we need to configure hbase.assignment.maximum.attempts as 1.
 1. Start one master and several region servers.
 2. Stop all the region servers.
 3. After a while, run bin/hbase master stop to shut down the cluster.
  As a result, the master could not be stopped within a short time, but was only 
 stopped after several hours. And after it stopped, I found these error logs.
 1. For the trunk:
   A. Lots of logs like java.io.IOException: Failed to find 
 location, tableName=hbase:meta, row=, reload=true
   B. And at last, there's one exception before the master is stopped, 
 ServerShutdownHandler: Received exception accessing hbase:meta during server 
 shutdown of server-XXX, retrying hbase:meta read
 java.io.InterruptedIOException: Interrupted after 0 tries  on 350.
 2. For the branch 0.94: 
   A. Lots of logs like Looked up root region location, 
 connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@44285d14;
  serverName=.
   B. And at last, there's one exception before the master is stopped, 
 ServerShutdownHandler: Received exception accessing META during server 
 shutdown of server-XXX, retrying META read
 org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find 
 region for  after 140 tries.
  We can see that the master is stopped only after lots of retries, which are 
 not necessary when the cluster is being shut down.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10600) HTable#batch() should perform validation on empty Put

2014-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912845#comment-13912845
 ] 

Hudson commented on HBASE-10600:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #174 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/174/])
HBASE-10600 HTable#batch() should perform validation on empty Put (tedyu: rev 
1571900)
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java


 HTable#batch() should perform validation on empty Put
 -

 Key: HBASE-10600
 URL: https://issues.apache.org/jira/browse/HBASE-10600
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Fix For: 0.98.1, 0.99.0

 Attachments: 10600-v1.txt, 10600-v2.txt, 10600-v3.txt, 10600-v4.txt


 Raised by java8964 in this thread:
 http://osdir.com/ml/general/2014-02/msg44384.html
 When an empty Put is passed in the List to HTable#batch(), there is no 
 validation performed, whereas an IllegalArgumentException would have been thrown 
 if this empty Put had been passed to the simple Put API call.
 Validation on empty Put should be carried out in HTable#batch().
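
A hedged sketch of the validation described above, written as a client-side pre-check before calling HTable#batch(); the class and helper names are illustrative, not the patch itself.

{code}
import java.util.List;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Row;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchValidation {
  // Illustrative: reject empty Puts up front, mirroring what the single-Put API does.
  public static void validate(List<? extends Row> actions) {
    for (Row action : actions) {
      if (action instanceof Put && ((Put) action).isEmpty()) {
        throw new IllegalArgumentException(
            "No columns to insert for row " + Bytes.toStringBinary(action.getRow()));
      }
    }
  }
}
{code}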



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10614) Master could not be stopped

2014-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912846#comment-13912846
 ] 

Hudson commented on HBASE-10614:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #174 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/174/])
HBASE-10614 Master could not be stopped (Jingcheng Du) (stack: rev 1571916)
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/catalog/MetaReader.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java


 Master could not be stopped
 ---

 Key: HBASE-10614
 URL: https://issues.apache.org/jira/browse/HBASE-10614
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.16, 0.99.0
Reporter: Jingcheng Du
Assignee: Jingcheng Du
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-10614-0.94.patch, HBASE-10614.patch


  It's an issue when to run bin/hbase master stop to shutdown the cluster.
  This could be reproduced by the following steps. Particularly for the trunk 
 code, we need to configure the hbase.assignment.maximum.attempts as 1.
 1. Start one master and several region servers.
 2. Stop all the region servers.
 3. After a while, run bin/hbase master stop to shutdown the cluster.
  As a result, the master could not be stopped within a short time, but will 
 be stopped after several hours. And after it's stopped, i find the error logs.
 1. For the trunk:
   A. lots of the logs which are java.io.IOException: Failed to find 
 location, tableName=hbase:meta, row=, reload=true
   B..And at last, there's one exception before the master is stopped, 
 ServerShutdownHandler: Received exception accessing hbase:meta during server 
 shutdown of server-XXX, retrying hbase:meta read
 java.io.InterruptedIOException: Interrupted after 0 tries  on 350.
 2. For the branch 0.94: 
   A. lots of the logs which are Looked up root region location, 
 connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@44285d14;
  serverName=.
   B. And at last, there's one exception before the master is stopped, 
 ServerShutdownHandler: Received exception accessing META during server 
 shutdown of server-XXX, retrying META read
 org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find 
 region for  after 140 tries.
  We could see the master are stopped after lots of reties which are not 
 necessary when the cluster is shutdown.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10617) Value lost if $ element is before column element in json when posted to Rest Server

2014-02-26 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912849#comment-13912849
 ] 

Jean-Marc Spaggiari commented on HBASE-10617:
-

Ok. I'm able to reproduce. Looking at it now.

BTW, ZjE6YzI= - f1:c2

What I have seen so far:
If $ is placed first, CF and qualifiers are taken from the URL (/t1/r2/f1:c1). 
When $ is placed second, then CF:Q is taken from the JSON.
Will keep you posted.

 Value lost if $ element is before column element in json when posted to 
 Rest Server
 ---

 Key: HBASE-10617
 URL: https://issues.apache.org/jira/browse/HBASE-10617
 Project: HBase
  Issue Type: Bug
  Components: REST
Affects Versions: 0.94.11
Reporter: Liu Shaohui
Priority: Minor

 When posting the following json data to the rest server, it returns 200, but the value is 
 null in HBase
 {code}
 {"Row": { "key":"cjI=", "Cell": {"$":"ZGF0YTE=", "column":"ZjE6YzI="}}}
 {code}
 From the rest server log, we found the length of the value is 0 after the server 
 parses the json into the RowModel object
 {code}
 14/02/26 17:52:14 DEBUG rest.RowResource: PUT 
 {"totalColumns":1,"families":{"f1":[{"timestamp":9223372036854775807,"qualifier":"c2","vlen":0}]},"row":"r2"}
 {code}
 When the order is column before $, it works fine.
 {code}
 {"Row": { "key":"cjI=", "Cell": {"column":"ZjE6YzI=", "$":"ZGF0YTE=" }}}
 {code}
 Different json libs may order these two elements differently even if 
 column is put before $.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10606) Bad timeout in RpcRetryingCaller#callWithRetries w/o parameters

2014-02-26 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10606:


Status: Open  (was: Patch Available)

 Bad timeout in RpcRetryingCaller#callWithRetries w/o parameters
 ---

 Key: HBASE-10606
 URL: https://issues.apache.org/jira/browse/HBASE-10606
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10606.v1.patch, 10606.v2.patch, 10606.v4.patch


 When we call this method w/o parameters, we don't take the configuration into 
 account, but use the hardcoded default (Integer.MAX_VALUE).
 If someone was relying on having an infinite timeout whatever the setting, 
 fixing this bug will surprise them. But there is no magic...
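
A minimal sketch of the intended behaviour: resolve the default call timeout from the configuration rather than from a hardcoded Integer.MAX_VALUE. The property name and fallback value are the standard client ones; the class and method are illustrative.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class DefaultCallTimeout {
  // Illustrative: what the no-argument callWithRetries should fall back to,
  // instead of Integer.MAX_VALUE.
  public static int defaultCallTimeout(Configuration conf) {
    return conf.getInt("hbase.rpc.timeout", 60000);   // configured rpc timeout, ms
  }

  public static void main(String[] args) {
    System.out.println(defaultCallTimeout(HBaseConfiguration.create()));
  }
}
{code}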



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10606) Bad timeout in RpcRetryingCaller#callWithRetries w/o parameters

2014-02-26 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10606:


Attachment: 10606.v4.patch

 Bad timeout in RpcRetryingCaller#callWithRetries w/o parameters
 ---

 Key: HBASE-10606
 URL: https://issues.apache.org/jira/browse/HBASE-10606
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10606.v1.patch, 10606.v2.patch, 10606.v4.patch


 When we call this method w/o parameters, we don't take the configuration into 
 account, but use the hardcoded default (Integer.MAX_VALUE).
 If someone was relying on having an infinite timeout whatever the setting, 
 fixing this bug will surprise them. But there is no magic...



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10606) Bad timeout in RpcRetryingCaller#callWithRetries w/o parameters

2014-02-26 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10606:


Status: Patch Available  (was: Open)

 Bad timeout in RpcRetryingCaller#callWithRetries w/o parameters
 ---

 Key: HBASE-10606
 URL: https://issues.apache.org/jira/browse/HBASE-10606
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10606.v1.patch, 10606.v2.patch, 10606.v4.patch


 When we call this method w/o parameters, we don't take the configuration into 
 account, but use the hardcoded default (Integer.MAX_VALUE).
 If someone was relying on having an infinite timeout whatever the setting, 
 fixing this bug will surprise them. But there is no magic...



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10606) Bad timeout in RpcRetryingCaller#callWithRetries w/o parameters

2014-02-26 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912861#comment-13912861
 ] 

Nicolas Liochon commented on HBASE-10606:
-

bq. My opinion: why include it if you don't use it.
I don't use it because I don't test this part :-). I wrote the test on HTable, 
and I used the setter there. But anyway, I removed the setter in the last 
version.

bq. By all means, let them bubble up! No over-catching.
In the end, I changed my mind and put the throwable back, checking for 
InterruptedException. At least it's 'as before'. 

Other comments were taken into account as well.

v4 is what I will commit if the tests are green again.

 Bad timeout in RpcRetryingCaller#callWithRetries w/o parameters
 ---

 Key: HBASE-10606
 URL: https://issues.apache.org/jira/browse/HBASE-10606
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10606.v1.patch, 10606.v2.patch, 10606.v4.patch


 When we call this method w/o parameters, we don't take the configuration into 
 account, but use the hardcoded default (Integer.MAX_VALUE).
 If someone was relying on having an infinite timeout whatever the setting, 
 fixing this bug will surprise them. But there is no magic...



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-4955) Use the official versions of surefire & junit

2014-02-26 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912882#comment-13912882
 ] 

Nicolas Liochon commented on HBASE-4955:


Still waiting for 2.17.

 Use the official versions of surefire & junit
 -

 Key: HBASE-4955
 URL: https://issues.apache.org/jira/browse/HBASE-4955
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 0.94.0, 0.98.0, 0.96.0
 Environment: all
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Critical
 Attachments: 4955.v1.patch, 4955.v2.patch, 4955.v2.patch, 
 4955.v2.patch, 4955.v2.patch, 4955.v3.patch, 4955.v3.patch, 4955.v3.patch, 
 4955.v4.patch, 4955.v4.patch, 4955.v4.patch, 4955.v4.patch, 4955.v4.patch, 
 4955.v4.patch, 4955.v5.patch, 4955.v6.patch, 4955.v7.patch, 4955.v7.patch, 
 4955.v8.patch, 8204.v4.patch


 We currently use private versions for Surefire  JUnit since HBASE-4763.
 This JIRA traks what we need to move to official versions.
 Surefire 2.11 is just out, but, after some tests, it does not contain all 
 what we need.
 JUnit. Could be for JUnit 4.11. Issue to monitor:
 https://github.com/KentBeck/junit/issues/359: fixed in our version, no 
 feedback for an integration on trunk
 Surefire: Could be for Surefire 2.12. Issues to monitor are:
 329 (category support): fixed, we use the official implementation from the 
 trunk
 786 (@Category with forkMode=always): fixed, we use the official 
 implementation from the trunk
 791 (incorrect elapsed time on test failure): fixed, we use the official 
 implementation from the trunk
 793 (incorrect time in the XML report): Not fixed (reopen) on trunk, fixed on 
 our version.
 760 (does not take into account the test method): fixed in trunk, not fixed 
 in our version
 798 (print immediately the test class name): not fixed in trunk, not fixed in 
 our version
 799 (Allow test parallelization when forkMode=always): not fixed in trunk, 
 not fixed in our version
 800 (redirectTestOutputToFile not taken into account): not yet fix on trunk, 
 fixed on our version
 800 & 793 are the more important ones to monitor; they are the only ones that are 
 fixed in our version but not on trunk.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HBASE-7840) Enhance the java it framework to start & stop a distributed hbase & hadoop cluster

2014-02-26 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon resolved HBASE-7840.


Resolution: Won't Fix

Too old.

 Enhance the java it framework to start & stop a distributed hbase & hadoop 
 cluster 
 ---

 Key: HBASE-7840
 URL: https://issues.apache.org/jira/browse/HBASE-7840
 Project: HBase
  Issue Type: New Feature
  Components: test
Affects Versions: 0.95.2
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 0.99.0

 Attachments: 7840.v1.patch, 7840.v3.patch, 7840.v4.patch


 Needs are to use a development version of HBase & HDFS 1 & 2.
 Ideally, should be nicely backportable to 0.94 to allow comparisons and 
 regression tests between versions.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10355) Failover RPC's from client using region replicas

2014-02-26 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912892#comment-13912892
 ] 

Nicolas Liochon commented on HBASE-10355:
-

bq. This has to be public?
I think so, as it's in the interface. But it's in ClusterConnection, not 
HConnection.

bq. should call out use of Replicas. ...WithReadReplicas?
Ok, will do.

bq. Something that subclasses HBaseIOE? (Think Benoit).
Will do as well.

bq. Can we avoid HRegion having to know about 'replicas'?
I don't know how :-).

bq. This is repeated code? In HRegion and in HRegionServer.
I can move the  
!ServerRegionReplicaUtil.isDefaultReplica(this.getRegionInfo()); into HRI?
Then the code would be 
ClientProtos.Result pbr = ProtobufUtil.toResult(existence, 
this.getRegionInfo().isDefaultReplica());
It would be more object oriented, like in the '90s.
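A rough sketch of the helper being suggested, with assumed field and constant names (this is not the actual HRegionInfo code):
{code}
public class RegionInfoSketch {
  public static final int DEFAULT_REPLICA_ID = 0;

  private final int replicaId;

  public RegionInfoSketch(int replicaId) { this.replicaId = replicaId; }

  public int getReplicaId() { return replicaId; }

  /** True when this is the primary (default) replica of the region. */
  public boolean isDefaultReplica() { return replicaId == DEFAULT_REPLICA_ID; }
}
{code}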

 Failover RPC's from client using region replicas
 

 Key: HBASE-10355
 URL: https://issues.apache.org/jira/browse/HBASE-10355
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Reporter: Enis Soztutar
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10355.v1.patch, 10355.v2.patch, 10355.v3.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-8803) region_mover.rb should move multiple regions at a time

2014-02-26 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-8803:
---

Attachment: HBASE-8803-v9-trunk.patch

 region_mover.rb should move multiple regions at a time
 --

 Key: HBASE-8803
 URL: https://issues.apache.org/jira/browse/HBASE-8803
 Project: HBase
  Issue Type: Bug
  Components: Usability
Affects Versions: 0.94.16, 0.98.1, 0.96.1.1
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: 8803v5.txt, HBASE-8803-v0-trunk.patch, 
 HBASE-8803-v1-0.94.patch, HBASE-8803-v1-trunk.patch, 
 HBASE-8803-v2-0.94.patch, HBASE-8803-v2-0.94.patch, HBASE-8803-v3-0.94.patch, 
 HBASE-8803-v4-0.94.patch, HBASE-8803-v4-trunk.patch, 
 HBASE-8803-v5-0.94.patch, HBASE-8803-v6-0.94.patch, 
 HBASE-8803-v6-trunk.patch, HBASE-8803-v7-trunk.patch, 
 HBASE-8803-v9-trunk.patch

   Original Estimate: 48h
  Remaining Estimate: 48h

 When there are many regions in a cluster, rolling_restart can take hours 
 because region_mover moves the regions one by one.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-8803) region_mover.rb should move multiple regions at a time

2014-02-26 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-8803:
---

Status: Open  (was: Patch Available)

 region_mover.rb should move multiple regions at a time
 --

 Key: HBASE-8803
 URL: https://issues.apache.org/jira/browse/HBASE-8803
 Project: HBase
  Issue Type: Bug
  Components: Usability
Affects Versions: 0.96.1.1, 0.94.16, 0.98.1
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: 8803v5.txt, HBASE-8803-v0-trunk.patch, 
 HBASE-8803-v1-0.94.patch, HBASE-8803-v1-trunk.patch, 
 HBASE-8803-v2-0.94.patch, HBASE-8803-v2-0.94.patch, HBASE-8803-v3-0.94.patch, 
 HBASE-8803-v4-0.94.patch, HBASE-8803-v4-trunk.patch, 
 HBASE-8803-v5-0.94.patch, HBASE-8803-v6-0.94.patch, 
 HBASE-8803-v6-trunk.patch, HBASE-8803-v7-trunk.patch, 
 HBASE-8803-v9-trunk.patch

   Original Estimate: 48h
  Remaining Estimate: 48h

 When there are many regions in a cluster, rolling_restart can take hours 
 because region_mover moves the regions one by one.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10608) Acquire the FS Delegation Token for Secure ExportSnapshot

2014-02-26 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912895#comment-13912895
 ] 

Anoop Sam John commented on HBASE-10608:


In 
/hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/FsDelegationToken.java
  the package is declared as
package org.apache.hadoop.hbase.security;  !!
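For clarity, the declaration one would expect given that path (illustrative skeleton only):
{code}
// hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/FsDelegationToken.java
package org.apache.hadoop.hbase.security.token;

public class FsDelegationToken {
  // ... body unchanged; only the package line is in question ...
}
{code}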



 Acquire the FS Delegation Token for Secure ExportSnapshot
 -

 Key: HBASE-10608
 URL: https://issues.apache.org/jira/browse/HBASE-10608
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.98.0, 0.94.16, 0.96.1.1
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: HBASE-10608-v0.patch


 Export Snapshot is missing the delegation token acquisition for working with 
 remote secure clusters



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-8803) region_mover.rb should move multiple regions at a time

2014-02-26 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-8803:
---

Status: Patch Available  (was: Open)

I see. Thanks again for looking at it.

Modified accordingly.

{quote}
Might have to check for an empty HBASE_CONF_DIR
{quote}

rolling_restart calls hbase-config.sh at the beginning, and from 
hbase-config.sh we have
{code}
HBASE_CONF_DIR=${HBASE_CONF_DIR:-$HBASE_HOME/conf}
{code}

And HBASE_HOME cannot be empty since a few lines earlier we have this:
{code}
if [ -z $HBASE_HOME ]; then
  export HBASE_HOME=`dirname $this`/..
fi
{code}

So I don't think we need to add any check since HBASE_CONF_DIR cannot be empty.

 region_mover.rb should move multiple regions at a time
 --

 Key: HBASE-8803
 URL: https://issues.apache.org/jira/browse/HBASE-8803
 Project: HBase
  Issue Type: Bug
  Components: Usability
Affects Versions: 0.96.1.1, 0.94.16, 0.98.1
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: 8803v5.txt, HBASE-8803-v0-trunk.patch, 
 HBASE-8803-v1-0.94.patch, HBASE-8803-v1-trunk.patch, 
 HBASE-8803-v2-0.94.patch, HBASE-8803-v2-0.94.patch, HBASE-8803-v3-0.94.patch, 
 HBASE-8803-v4-0.94.patch, HBASE-8803-v4-trunk.patch, 
 HBASE-8803-v5-0.94.patch, HBASE-8803-v6-0.94.patch, 
 HBASE-8803-v6-trunk.patch, HBASE-8803-v7-trunk.patch, 
 HBASE-8803-v9-trunk.patch

   Original Estimate: 48h
  Remaining Estimate: 48h

 When there are many regions in a cluster, rolling_restart can take hours 
 because region_mover moves the regions one by one.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10618) User should not be allowed to disable/drop visibility labels table

2014-02-26 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-10618:
--

 Summary: User should not be allowed to disable/drop visibility 
labels table
 Key: HBASE-10618
 URL: https://issues.apache.org/jira/browse/HBASE-10618
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.98.1, 0.99.0


Deny all DDL operations on the labels table, like add/delete cf etc.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10080) Unnecessary call to locateRegion when creating an HTable instance

2014-02-26 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10080:


Status: Open  (was: Patch Available)

 Unnecessary call to locateRegion when creating an HTable instance
 -

 Key: HBASE-10080
 URL: https://issues.apache.org/jira/browse/HBASE-10080
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.96.0, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 0.98.1

 Attachments: 10080.v1.patch


 It's more or less in contradiction with the objective of having lightweight 
 HTable objects, and the data may be stale by the time we use it. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10080) Unnecessary call to locateRegion when creating an HTable instance

2014-02-26 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10080:


Status: Patch Available  (was: Open)

 Unnecessary call to locateRegion when creating an HTable instance
 -

 Key: HBASE-10080
 URL: https://issues.apache.org/jira/browse/HBASE-10080
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.96.0, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 0.98.1

 Attachments: 10080.v1.patch, 10080.v2.patch


 It's more or less in contradiction with the objective of having lightweight 
 HTable objects, and the data may be stale by the time we use it. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10080) Unnecessary call to locateRegion when creating an HTable instance

2014-02-26 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10080:


Attachment: 10080.v2.patch

 Unnecessary call to locateRegion when creating an HTable instance
 -

 Key: HBASE-10080
 URL: https://issues.apache.org/jira/browse/HBASE-10080
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 0.98.1

 Attachments: 10080.v1.patch, 10080.v2.patch


 It's more or less in contradiction with the objective of having lightweight 
 HTable objects, and the data may be stale by the time we use it. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10080) Unnecessary call to locateRegion when creating an HTable instance

2014-02-26 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912911#comment-13912911
 ] 

Nicolas Liochon commented on HBASE-10080:
-

I would like to
- make HTable creation as cheap as possible
- minimize the accesses to .meta. and/or the cache.

This jira is a drop in that ocean.

 Unnecessary call to locateRegion when creating an HTable instance
 -

 Key: HBASE-10080
 URL: https://issues.apache.org/jira/browse/HBASE-10080
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 0.98.1

 Attachments: 10080.v1.patch, 10080.v2.patch


 It's more or less in contradiction with the objective of having lightweight 
 HTable objects, and the data may be stale by the time we use it. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10619) Don't allow user to disable/drop NS table

2014-02-26 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-10619:
--

 Summary: Don't allow user to disable/drop NS table
 Key: HBASE-10619
 URL: https://issues.apache.org/jira/browse/HBASE-10619
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: Anoop Sam John


We should treat the NS table just like the META table, which has checks so that 
the user cannot disable/drop it.
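A minimal sketch of the kind of guard being asked for; the table names and the check method are illustrative, not the actual master/coprocessor code:
{code}
import java.io.IOException;

public final class SystemTableGuard {
  private static final String META = "hbase:meta";
  private static final String NAMESPACE = "hbase:namespace";

  private SystemTableGuard() {}

  /** Reject disable/drop (and similar DDL) on system tables. */
  public static void checkDisableOrDrop(String tableName) throws IOException {
    if (META.equals(tableName) || NAMESPACE.equals(tableName)) {
      throw new IOException("Cannot disable or drop system table " + tableName);
    }
  }
}
{code}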





--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-8304) Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port.

2014-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912960#comment-13912960
 ] 

Hadoop QA commented on HBASE-8304:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12631204/HBASE-8304.patch
  against trunk revision .
  ATTACHMENT ID: 12631204

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+Method getNNAddressesMethod = 
dfsUtilClazz.getMethod("getNNServiceRpcAddresses", Configuration.class);
+(Map<String, Map<String, InetSocketAddress>>) 
getNNAddressesMethod.invoke(null, conf);

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8812//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8812//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8812//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8812//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8812//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8812//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8812//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8812//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8812//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8812//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8812//console

This message is automatically generated.

 Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured 
 without default port.
 ---

 Key: HBASE-8304
 URL: https://issues.apache.org/jira/browse/HBASE-8304
 Project: HBase
  Issue Type: Bug
  Components: HFile, regionserver
Affects Versions: 0.94.5
Reporter: Raymond Liu
  Labels: bulkloader
 Attachments: HBASE-8304.patch


 When fs.default.name or fs.defaultFS in hadoop core-site.xml is configured as 
 hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir 
 where port is the hdfs namenode's default port, the bulkload operation will 
 not remove the file in the bulk output dir. Store::bulkLoadHfile will treat 
 hdfs://ip and hdfs://ip:port as different filesystems and go with the copy 
 approach instead of rename.
 The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS 
 according to hbase.rootdir when the regionserver starts, thus the dest fs uri from 
 the hregion will not match the src fs uri passed from the client.
 Any suggestion on the best approach to fix this issue? 
 I kind of think that we could check for the default port if the src uri comes without 
 port info.
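An illustrative check for that idea: treat hdfs://ip and hdfs://ip:port as the same filesystem when the port is the scheme's default. The default-port constant and method names are assumptions, not the actual Store/HRegion code.
{code}
import java.net.URI;

public final class FsUriCompare {
  private static final int DEFAULT_HDFS_PORT = 8020; // assumption for the example

  private FsUriCompare() {}

  /** Compare two filesystem URIs, filling in the default port when one is missing it. */
  public static boolean sameFileSystem(URI a, URI b) {
    return eq(a.getScheme(), b.getScheme())
        && eq(a.getHost(), b.getHost())
        && portOf(a) == portOf(b);
  }

  private static int portOf(URI u) {
    return u.getPort() == -1 ? DEFAULT_HDFS_PORT : u.getPort();
  }

  private static boolean eq(String x, String y) {
    return x == null ? y == null : x.equalsIgnoreCase(y);
  }
}
{code}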



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-8304) Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port.

2014-02-26 Thread haosdent (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912964#comment-13912964
 ] 

haosdent commented on HBASE-8304:
-

Would add test cases later.

 Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured 
 without default port.
 ---

 Key: HBASE-8304
 URL: https://issues.apache.org/jira/browse/HBASE-8304
 Project: HBase
  Issue Type: Bug
  Components: HFile, regionserver
Affects Versions: 0.94.5
Reporter: Raymond Liu
  Labels: bulkloader
 Attachments: HBASE-8304.patch


 When fs.default.name or fs.defaultFS in hadoop core-site.xml is configured as 
 hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir 
 where port is the hdfs namenode's default port, the bulkload operation will 
 not remove the file in the bulk output dir. Store::bulkLoadHfile will treat 
 hdfs://ip and hdfs://ip:port as different filesystems and go with the copy 
 approach instead of rename.
 The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS 
 according to hbase.rootdir when the regionserver starts, thus the dest fs uri from 
 the hregion will not match the src fs uri passed from the client.
 Any suggestion on the best approach to fix this issue? 
 I kind of think that we could check for the default port if the src uri comes without 
 port info.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10018) Change the location prefetch

2014-02-26 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10018:


Attachment: 10018.v1.patch

 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 0.98.1, 0.99.0

 Attachments: 10018.v1.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
  



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10018) Change the location prefetch

2014-02-26 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912990#comment-13912990
 ] 

Nicolas Liochon commented on HBASE-10018:
-

v1:
 - removes the lock
 - does a small reversed scan instead of a getRowSomething and a scan (see the 
sketch below)
 - deprecates all the prefetch methods; keeps the interfaces but the methods do 
nothing
 - when we were doing a locate *without* the cache, we were also removing the 
entry from the cache. We don't do that anymore.
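A sketch of the small reversed meta scan mentioned above, assuming we start from the meta row key built for the requested row; building that key and running the scan against hbase:meta are omitted, and this is not the attached patch:
{code}
import org.apache.hadoop.hbase.client.Scan;

public final class ReverseMetaLookup {
  private ReverseMetaLookup() {}

  /** Scan backwards from the requested row's meta key to find the region containing it. */
  public static Scan buildLookupScan(byte[] metaRowForRequestedRow) {
    Scan scan = new Scan(metaRowForRequestedRow);
    scan.setReversed(true); // walk backwards from the requested row
    scan.setSmall(true);    // single round-trip "small" scan
    scan.setCaching(1);     // we only need the closest preceding meta row
    return scan;
  }
}
{code}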

 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 0.98.1, 0.99.0

 Attachments: 10018.v1.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
  



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10618) User should not be allowed to disable/drop visibility labels table

2014-02-26 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10618:
---

Attachment: HBASE-10618.patch

 User should not be allowed to disable/drop visibility labels table
 --

 Key: HBASE-10618
 URL: https://issues.apache.org/jira/browse/HBASE-10618
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.98.1, 0.99.0

 Attachments: HBASE-10618.patch


 Deny all DDL operations on the labels table, like add/delete cf etc.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10451) Enable back Tag compression on HFiles

2014-02-26 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10451:
---

Attachment: (was: HBASE-10451_V6.patch)

 Enable back Tag compression on HFiles
 -

 Key: HBASE-10451
 URL: https://issues.apache.org/jira/browse/HBASE-10451
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.98.1, 0.99.0

 Attachments: HBASE-10451.patch, HBASE-10451_V2.patch, 
 HBASE-10451_V3.patch, HBASE-10451_V4.patch, HBASE-10451_V5.patch


 HBASE-10443 disables tag compression on HFiles. This Jira is to fix the 
 issues we found in HBASE-10443 and enable it again.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10451) Enable back Tag compression on HFiles

2014-02-26 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10451:
---

Status: Open  (was: Patch Available)

 Enable back Tag compression on HFiles
 -

 Key: HBASE-10451
 URL: https://issues.apache.org/jira/browse/HBASE-10451
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.98.1, 0.99.0

 Attachments: HBASE-10451.patch, HBASE-10451_V2.patch, 
 HBASE-10451_V3.patch, HBASE-10451_V4.patch, HBASE-10451_V5.patch


 HBASE-10443 disables tag compression on HFiles. This Jira is to fix the 
 issues we found in HBASE-10443 and enable it again.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10618) User should not be allowed to disable/drop visibility labels table

2014-02-26 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10618:
---

Status: Patch Available  (was: Open)

 User should not be allowed to disable/drop visibility labels table
 --

 Key: HBASE-10618
 URL: https://issues.apache.org/jira/browse/HBASE-10618
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.98.1, 0.99.0

 Attachments: HBASE-10618.patch


 Deny all DDL operations on the labels table, like add/delete cf etc.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10451) Enable back Tag compression on HFiles

2014-02-26 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10451:
---

Attachment: HBASE-10451_V6.patch

Reattaching the V6 patch one last time. Let us see what QA says now.

 Enable back Tag compression on HFiles
 -

 Key: HBASE-10451
 URL: https://issues.apache.org/jira/browse/HBASE-10451
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.98.1, 0.99.0

 Attachments: HBASE-10451.patch, HBASE-10451_V2.patch, 
 HBASE-10451_V3.patch, HBASE-10451_V4.patch, HBASE-10451_V5.patch, 
 HBASE-10451_V6.patch


 HBASE-10443 disables tag compression on HFiles. This Jira is to fix the 
 issues we found in HBASE-10443 and enable it again.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10355) Failover RPC's from client using region replicas

2014-02-26 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13912993#comment-13912993
 ] 

Nicolas Liochon commented on HBASE-10355:
-

I've also added a patch for HBASE-10018. Hopefully, this will lower the 
pressure on .meta.

 Failover RPC's from client using region replicas
 

 Key: HBASE-10355
 URL: https://issues.apache.org/jira/browse/HBASE-10355
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Reporter: Enis Soztutar
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10355.v1.patch, 10355.v2.patch, 10355.v3.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10451) Enable back Tag compression on HFiles

2014-02-26 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10451:
---

Status: Patch Available  (was: Open)

 Enable back Tag compression on HFiles
 -

 Key: HBASE-10451
 URL: https://issues.apache.org/jira/browse/HBASE-10451
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.98.1, 0.99.0

 Attachments: HBASE-10451.patch, HBASE-10451_V2.patch, 
 HBASE-10451_V3.patch, HBASE-10451_V4.patch, HBASE-10451_V5.patch, 
 HBASE-10451_V6.patch


 HBASE-10443 disables tag compression on HFiles. This Jira is to fix the 
 issues we found in HBASE-10443 and enable it again.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10600) HTable#batch() should perform validation on empty Put

2014-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913013#comment-13913013
 ] 

Hudson commented on HBASE-10600:


FAILURE: Integrated in HBase-TRUNK #4957 (See 
[https://builds.apache.org/job/HBase-TRUNK/4957/])
HBASE-10600 HTable#batch() should perform validation on empty Put (tedyu: rev 
1571899)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java


 HTable#batch() should perform validation on empty Put
 -

 Key: HBASE-10600
 URL: https://issues.apache.org/jira/browse/HBASE-10600
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Fix For: 0.98.1, 0.99.0

 Attachments: 10600-v1.txt, 10600-v2.txt, 10600-v3.txt, 10600-v4.txt


 Raised by java8964 in this thread:
 http://osdir.com/ml/general/2014-02/msg44384.html
 When an empty Put is passed in the List to HTable#batch(), there is no 
 validation performed, whereas an IllegalArgumentException would have been thrown 
 if this empty Put had been passed to the simple Put API call.
 Validation on empty Put should be carried out in HTable#batch().
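A minimal sketch of that validation over a batch list (per the commit above, the real fix lives in AsyncProcess.java; the helper below is only illustrative):
{code}
import java.util.List;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Row;

public final class BatchValidation {
  private BatchValidation() {}

  /** Reject empty Puts up front, mirroring what the single-Put API already does. */
  public static void validate(List<? extends Row> actions) {
    for (Row r : actions) {
      if (r instanceof Put && ((Put) r).isEmpty()) {
        throw new IllegalArgumentException("No columns to insert for an empty Put");
      }
    }
  }
}
{code}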



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10614) Master could not be stopped

2014-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913014#comment-13913014
 ] 

Hudson commented on HBASE-10614:


FAILURE: Integrated in HBase-TRUNK #4957 (See 
[https://builds.apache.org/job/HBase-TRUNK/4957/])
HBASE-10614 Master could not be stopped (Jingcheng Du) (stack: rev 1571915)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/catalog/MetaReader.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java


 Master could not be stopped
 ---

 Key: HBASE-10614
 URL: https://issues.apache.org/jira/browse/HBASE-10614
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.16, 0.99.0
Reporter: Jingcheng Du
Assignee: Jingcheng Du
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-10614-0.94.patch, HBASE-10614.patch


  It's an issue when running bin/hbase master stop to shut down the cluster.
  This can be reproduced by the following steps. Particularly for the trunk 
 code, we need to configure hbase.assignment.maximum.attempts as 1.
 1. Start one master and several region servers.
 2. Stop all the region servers.
 3. After a while, run bin/hbase master stop to shut down the cluster.
  As a result, the master cannot be stopped within a short time, but will 
 only be stopped after several hours. And after it's stopped, I find these error logs.
 1. For the trunk:
   A. Lots of logs like java.io.IOException: Failed to find 
 location, tableName=hbase:meta, row=, reload=true
   B. And at last, there's one exception before the master is stopped: 
 ServerShutdownHandler: Received exception accessing hbase:meta during server 
 shutdown of server-XXX, retrying hbase:meta read
 java.io.InterruptedIOException: Interrupted after 0 tries  on 350.
 2. For the branch 0.94: 
   A. Lots of logs like Looked up root region location, 
 connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@44285d14;
  serverName=.
   B. And at last, there's one exception before the master is stopped: 
 ServerShutdownHandler: Received exception accessing META during server 
 shutdown of server-XXX, retrying META read
 org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find 
 region for  after 140 tries.
  We can see that the master is stopped only after lots of retries, which are not 
 necessary when the cluster is being shut down.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10566) cleanup rpcTimeout in the client

2014-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913012#comment-13913012
 ] 

Hudson commented on HBASE-10566:


FAILURE: Integrated in HBase-TRUNK #4957 (See 
[https://builds.apache.org/job/HBase-TRUNK/4957/])
HBASE-10566 cleanup rpcTimeout in the client - addendum (nkeywal: rev 1572033)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClient.java


 cleanup rpcTimeout in the client
 

 Key: HBASE-10566
 URL: https://issues.apache.org/jira/browse/HBASE-10566
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10566.sample.patch, 10566.v1.patch, 10566.v2.patch, 
 10566.v3.patch


 There are two issues:
 1) A confusion between the socket timeout and the call timeout.
 Socket timeouts should be minimal: a default like 20 seconds, which could be 
 lowered to single-digit timeouts for some apps: if we cannot write to the 
 socket in 10 seconds, we have an issue. This is different from the total 
 duration (send query + do query + receive query), which can be longer, as it 
 can include remote calls on the server and so on. Today, we have a single 
 value; it does not allow us to have low socket read timeouts.
 2) The timeout can be different between the calls. Typically, if the total 
 time, retries included, is 60 seconds and the first attempt failed after 2 seconds, then the 
 remaining time is 58s. HBase does this today, but by hacking with a thread-local 
 storage variable. It's a hack (it should have been a parameter of the 
 methods; the TLS allowed bypassing all the layers. Maybe protobuf makes this 
 complicated, to be confirmed), and it does not really work either, because 
 we can have multithreading issues (we use the updated rpc timeout of someone 
 else, or we create a new BlockingRpcChannelImplementation with a random 
 default timeout).
 Ideally, we could send the call timeout to the server as well: it would be 
 able to dismiss on its own the calls that it received but that got stuck in the request 
 queue or in the internal retries (on hdfs for example).
 This would make the system more reactive to failure.
 I think we can solve this now, especially after 10525. The main issue is to find 
 something that fits well with protobuf...
 Then it should be easy to have a pool of threads for writers and readers, w/o 
 a single thread per region server as today. 
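A tiny sketch of the separation argued for above: a short socket-level timeout plus a per-call timeout computed from the remaining operation time and passed explicitly rather than through a thread-local. Names and values are assumptions, not the eventual implementation.
{code}
public final class CallTimeouts {
  private CallTimeouts() {}

  /** Socket read/write timeout: "if we cannot write to the socket in ~20s, we have an issue". */
  public static final int SOCKET_TIMEOUT_MS = 20000;

  /** Time left for the next attempt, given when the whole operation must be finished. */
  public static int remainingCallTimeoutMs(long operationDeadlineMs) {
    long remaining = operationDeadlineMs - System.currentTimeMillis();
    if (remaining <= 0) {
      return 0; // deadline already passed; the caller should fail fast
    }
    return (int) Math.min(remaining, Integer.MAX_VALUE);
  }
}
{code}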



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-8894) Forward port compressed l2 cache from 0.89fb

2014-02-26 Thread Sudarshan Kadambi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913015#comment-13913015
 ] 

Sudarshan Kadambi commented on HBASE-8894:
--

Liang - Are you still running performance tests to see if storing compressed 
blocks in the L2 cache has any benefits? What are the next steps for 
integrating this into the mainline code?

 Forward port compressed l2 cache from 0.89fb
 

 Key: HBASE-8894
 URL: https://issues.apache.org/jira/browse/HBASE-8894
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Liang Xie
Priority: Critical
 Attachments: HBASE-8894-0.94-v1.txt, HBASE-8894-0.94-v2.txt


 Forward port Alex's improvement on hbase-7407 from 0.89-fb branch:
 {code}
   1 r1492797 | liyin | 2013-06-13 11:18:20 -0700 (Thu, 13 Jun 2013) | 43 lines
   2
   3 [master] Implements a secondary compressed cache (L2 cache)
   4
   5 Author: avf
   6
   7 Summary:
   8 This revision implements compressed and encoded second-level cache with 
 off-heap
   9 (and optionally on-heap) storage and a bucket-allocator based on 
 HBASE-7404.
  10
  11 BucketCache from HBASE-7404 is extensively modified to:
  12
  13 * Only handle byte arrays (i.e., no more serialization/deserialization 
 within)
  14 * Remove persistence support for the time being
  15 * Keep an  index of hfilename to blocks for efficient eviction on close
  16
  17 A new interface (L2Cache) is introduced in order to separate it from the 
 current
  18 implementation. The L2 cache is then integrated into the classes that 
 handle
  19 reading from and writing to HFiles to allow cache-on-write as well as
  20 cache-on-read. Metrics for the L2 cache are integrated into 
 RegionServerMetrics
  21 much in the same fashion as metrics for the existing (L2) BlockCache.
  22
  23 Additionally, CacheConfig class is re-refactored to configure the L2 
 cache,
  24 replace multile constructors with a Builder, as well as replace static 
 methods
  25 for instantiating the caches with abstract factories (with singleton
  26 implementations for both the existing LruBlockCache and the newly 
 introduced
  27 BucketCache based L2 cache)
  28
  29 Test Plan:
  30 1) Additional unit tests
  31 2) Stress test on a single devserver
  32 3) Test on a single-node in shadow cluster
  33 4) Test on a whole shadow cluster
  34
  35 Revert Plan:
  36
  37 Reviewers: liyintang, aaiyer, rshroff, manukranthk, adela
  38
  39 Reviewed By: liyintang
  40
  41 CC: gqchen, hbase-eng@
  42
  43 Differential Revision: https://phabricator.fb.com/D837264
  44
  45 Task ID: 2325295
  7 
   6 r1492340 | liyin | 2013-06-12 11:36:03 -0700 (Wed, 12 Jun 2013) | 21 lines
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10618) User should not be allowed to disable/drop visibility labels table

2014-02-26 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913034#comment-13913034
 ] 

Ted Yu commented on HBASE-10618:


lgtm

 User should not be allowed to disable/drop visibility labels table
 --

 Key: HBASE-10618
 URL: https://issues.apache.org/jira/browse/HBASE-10618
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.98.1, 0.99.0

 Attachments: HBASE-10618.patch


 Deny all DDL operations on the labels table, like add/delete cf etc.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10606) Bad timeout in RpcRetryingCaller#callWithRetries w/o parameters

2014-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913052#comment-13913052
 ] 

Hadoop QA commented on HBASE-10606:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12631217/10606.v4.patch
  against trunk revision .
  ATTACHMENT ID: 12631217

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8813//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8813//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8813//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8813//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8813//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8813//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8813//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8813//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8813//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8813//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8813//console

This message is automatically generated.

 Bad timeout in RpcRetryingCaller#callWithRetries w/o parameters
 ---

 Key: HBASE-10606
 URL: https://issues.apache.org/jira/browse/HBASE-10606
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10606.v1.patch, 10606.v2.patch, 10606.v4.patch


 When we call this method w/o parameters, we don't take the configuration into 
 account, but use the hardcoded default (Integer.MAX).
 If someone was relying on having an infinite timeout whatever the setting, 
 fixing this bug will come as a surprise to him. But there is no magic...



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10618) User should not be allowed to disable/drop visibility labels table

2014-02-26 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913075#comment-13913075
 ] 

ramkrishna.s.vasudevan commented on HBASE-10618:


+1

 User should not be allowed to disable/drop visibility labels table
 --

 Key: HBASE-10618
 URL: https://issues.apache.org/jira/browse/HBASE-10618
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.98.1, 0.99.0

 Attachments: HBASE-10618.patch


 Deny all DDL operations on the labels table, like add/delete cf etc.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10606) Bad timeout in RpcRetryingCaller#callWithRetries w/o parameters

2014-02-26 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10606:


  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed v4 to trunk, thanks for the reviews, Nick & Stack.

 Bad timeout in RpcRetryingCaller#callWithRetries w/o parameters
 ---

 Key: HBASE-10606
 URL: https://issues.apache.org/jira/browse/HBASE-10606
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10606.v1.patch, 10606.v2.patch, 10606.v4.patch


 When we call this method w/o parameters, we don't take the configuration into 
 account, but use the hardcoded default (Integer.MAX).
 If someone was relying on having an infinite timeout whatever the setting, 
 fixing this bug will come as a surprise to him. But there is no magic...



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10018) Change the location prefetch

2014-02-26 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10018:


Status: Patch Available  (was: Open)

 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 0.98.1, 0.99.0

 Attachments: 10018.v1.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
  



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-8803) region_mover.rb should move multiple regions at a time

2014-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913101#comment-13913101
 ] 

Hadoop QA commented on HBASE-8803:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12631221/HBASE-8803-v9-trunk.patch
  against trunk revision .
  ATTACHMENT ID: 12631221

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the trunk's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  opts.on('-m', '--maxthreads=XX', 'Define the maximum number of threads 
to use to unload and reload the regions') do |number|
+HBASE_NOEXEC=true $bin/hbase --config ${HBASE_CONF_DIR} org.jruby.Main 
$bin/region_mover.rb --file=$filename $debug --maxthreads=$maxthreads unload 
$hostname
+HBASE_NOEXEC=true $bin/hbase --config ${HBASE_CONF_DIR} org.jruby.Main 
$bin/region_mover.rb --file=$filename $debug --maxthreads=$maxthreads load 
$hostname
+usage=Usage: $0 [--config hbase-confdir] [--rs-only] [--master-only] 
[--graceful] [--maxthreads xx]
+distMode=`HBASE_CONF_DIR=${HBASE_CONF_DIR} $bin/hbase 
org.apache.hadoop.hbase.util.HBaseConfTool hbase.cluster.distributed | head -n 
1`
+$bin/graceful_stop.sh --config ${HBASE_CONF_DIR} --restart 
--reload --debug --maxthreads ${RR_MAXTHREADS} $hostname

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8814//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8814//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8814//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8814//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8814//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8814//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8814//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8814//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8814//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8814//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8814//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8814//console

This message is automatically generated.

 region_mover.rb should move multiple regions at a time
 --

 Key: HBASE-8803
 URL: https://issues.apache.org/jira/browse/HBASE-8803
 Project: HBase
  Issue Type: Bug
  Components: Usability
Affects Versions: 0.94.16, 0.98.1, 0.96.1.1
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: 8803v5.txt, HBASE-8803-v0-trunk.patch, 
 HBASE-8803-v1-0.94.patch, HBASE-8803-v1-trunk.patch, 
 HBASE-8803-v2-0.94.patch, HBASE-8803-v2-0.94.patch, HBASE-8803-v3-0.94.patch, 
 HBASE-8803-v4-0.94.patch, HBASE-8803-v4-trunk.patch, 
 HBASE-8803-v5-0.94.patch, 

[jira] [Commented] (HBASE-9778) Avoid seeking to next column in ExplicitColumnTracker when possible

2014-02-26 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913119#comment-13913119
 ] 

Lars Hofhansl commented on HBASE-9778:
--

Some further observations.

When we reseek for a column we pass a KV that would be located just before the 
first KV for that column. In the various scanners, we then seek forward in the 
file until we're *past* the KV passed in, then we go back one KV, discarding the 
current KV. So when we seek forward through a file we'll scan every KV 
twice.

I'm planning to test passing a special KV so that the scanners can tell when 
we're *on* the KV we're looking for. For example, when looking for a column we can 
scan forward until we see the first KV for that row, fam, col, and then we can 
stop. No need to scan one more, remember the previous, and then go 
back. For cases with few versions/columns that should shave off a large portion 
of the time. Will report back.


 Avoid seeking to next column in ExplicitColumnTracker when possible
 ---

 Key: HBASE-9778
 URL: https://issues.apache.org/jira/browse/HBASE-9778
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Attachments: 9778-0.94-v2.txt, 9778-0.94-v3.txt, 9778-0.94-v4.txt, 
 9778-0.94.txt, 9778-trunk-v2.txt, 9778-trunk-v3.txt, 9778-trunk.txt


 The issue of slow seeking in ExplicitColumnTracker was brought up by 
 [~vrodionov] on the dev list.
 My idea here is to avoid the seeking if we know that there aren't many 
 versions to skip.
 How do we know? We'll use the column family's VERSIONS setting as a hint. If 
 VERSIONS is set to 1 (or maybe some value < 10) we'll avoid the seek and call 
 SKIP repeatedly.
 HBASE-9769 has some initial numbers for this approach:
 Interestingly it depends on which column(s) is (are) selected.
 Some numbers: 4m rows, 5 cols each, 1 cf, 10 bytes values, VERSIONS=1, 
 everything filtered at the server with a ValueFilter. Everything measured in 
 seconds.
 Without patch:
 ||Wildcard||Col 1||Col 2||Col 4||Col 5||Col 2+4||
 |6.4|8.5|14.3|14.6|11.1|20.3|
 With patch:
 ||Wildcard||Col 1||Col 2||Col 4||Col 5||Col 2+4||
 |6.4|8.4|8.9|9.9|6.4|10.0|
 Variation here was +- 0.2s.
 So with this patch scanning is 2x faster than without in some cases, and 
 never slower. No special hint needed, beyond declaring VERSIONS correctly.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10592) Refactor PerformanceEvaluation tool

2014-02-26 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913128#comment-13913128
 ] 

Jean-Marc Spaggiari commented on HBASE-10592:
-

Let me know when you think it will be ready for me to give it a try...

 Refactor PerformanceEvaluation tool
 ---

 Key: HBASE-10592
 URL: https://issues.apache.org/jira/browse/HBASE-10592
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 0.96.2, 0.98.1, 0.99.0
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-10592.00-0.96.patch, HBASE-10592.00-0.98.patch, 
 HBASE-10592.00.patch, HBASE-10592.01-0.96.patch, HBASE-10592.01-0.98.patch, 
 HBASE-10592.01.patch


 PerfEval is kind of a mess. It's painful to add new features because the test 
 options are itemized and passed as parameters to internal methods. 
 Serialization is hand-rolled and tedious. Ensuring support for mapreduce mode 
 is a chore because of it.
 This patch refactors the tool. Options are now passed around to methods and 
 such as a POJO instead of one-by-one. Get rid of accessors that don't help 
 anyone. On the mapreduce side, serialization is now handled using json 
 (jackson is a dependency anyway) instead of the hand-rolled regex we used 
 before. Also do away with custom InputSplit and FileFormat, instead using 
 Text and NLineInputFormat. On the local mode side, combine 1 client and N 
 clients into the same implementation. That implementation now uses an 
 ExecutorService, so we can later decouple number of client workers from 
 number of client tasks. Finally, drop a bunch of confusing local state, 
 instead use the new TestOptions POJO as a parameter to static methods.
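A sketch of the json round-trip described above using the jackson 1.x API already on the classpath; the fields here are a stand-in POJO, not the actual PerformanceEvaluation.TestOptions:
{code}
import java.io.IOException;

import org.codehaus.jackson.map.ObjectMapper;

public class TestOptionsSketch {
  // Public fields keep the example short; jackson 1.x picks them up by default.
  public String cmdName = "randomRead";
  public int numClientThreads = 1;
  public long totalRows = 1024 * 1024;

  public static String toJson(TestOptionsSketch opts) throws IOException {
    return new ObjectMapper().writeValueAsString(opts);
  }

  public static TestOptionsSketch fromJson(String json) throws IOException {
    return new ObjectMapper().readValue(json, TestOptionsSketch.class);
  }
}
{code}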



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10603) Deprecate RegionSplitter CLI tool

2014-02-26 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913141#comment-13913141
 ] 

Jean-Marc Spaggiari commented on HBASE-10603:
-

If you remove the main, can you please make sure it's documented somewhere how 
to achieve the same thing with the shell, in case someone is still using it?

 Deprecate RegionSplitter CLI tool
 -

 Key: HBASE-10603
 URL: https://issues.apache.org/jira/browse/HBASE-10603
 Project: HBase
  Issue Type: Improvement
  Components: util
Reporter: Nick Dimiduk
Priority: Minor
  Labels: noob
 Fix For: 0.99.0


 RegionSplitter is a utility for partitioning a table based on some split 
 algorithm. Those same algorithms are exposed via the shell create command. 
 There's no value in having two ways to access the same functionality. Ensure 
 the main method doesn't provide any functionality absent from the shell and 
 remove it.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9953) PerformanceEvaluation: Decouple data size from client concurrency

2014-02-26 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913151#comment-13913151
 ] 

Jean-Marc Spaggiari commented on HBASE-9953:


2 comments.

1) You might want to update printUsage too to provide usage.
2) If both  (opts.size == DEFAULT_OPTS.size) and  (opts.perClientRunRows == 
DEFAULT_OPTS.perClientRunRows)  then opts.totalRows is never set?

 PerformanceEvaluation: Decouple data size from client concurrency
 -

 Key: HBASE-9953
 URL: https://issues.apache.org/jira/browse/HBASE-9953
 Project: HBase
  Issue Type: Test
  Components: test
Reporter: Nick Dimiduk
Priority: Minor
 Attachments: HBASE-9953.00.patch


 PerfEval tool provides a {{--rows=R}} for specifying the number of records to 
 work with and requires the user provide a value of N, used as the concurrency 
 level. From what I can tell, every concurrent process will interact with R 
 rows. In order to perform an apples-to-apples test, the user must 
 re-calculate the value R for every new value of N.
 Instead, I propose accepting a {{--size=S}} for the amount of data to 
 interact with and letting PerfEval divide that amongst the N clients on the 
 user's behalf.
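A minimal sketch of the division proposed above, which also shows one way to handle the defaulting question raised in the comment earlier in this thread; the constant names and the GB-to-rows conversion are hypothetical, not the patch itself:
{code}
public class SizeToRowsSketch {
  // Hypothetical: rows of the standard payload that fit in one GB.
  static final long ROWS_PER_GB = 1024 * 1024;

  /** Rows each of the N clients should work on. */
  static long perClientRows(double sizeInGB, long explicitRows, int numClients,
                            double defaultSize) {
    long totalRows;
    if (sizeInGB != defaultSize) {
      totalRows = (long) (sizeInGB * ROWS_PER_GB);  // --size wins when given
    } else {
      totalRows = explicitRows * numClients;        // fall back to --rows (covers the all-default case too)
    }
    return totalRows / numClients;
  }

  public static void main(String[] args) {
    // 10 GB spread over 5 clients: the total data volume stays fixed as N changes,
    // which is the apples-to-apples property the issue asks for.
    System.out.println(perClientRows(10.0, 1048576, 5, 1.0));
  }
}
{code}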



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-9914) Port fix for HBASE-9836 'Intermittent TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking failure' to 0.94

2014-02-26 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9914:
--

Fix Version/s: 0.94.18

 Port fix for HBASE-9836 'Intermittent 
 TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking 
 failure' to 0.94
 -

 Key: HBASE-9914
 URL: https://issues.apache.org/jira/browse/HBASE-9914
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: takeshi.miao
  Labels: noob
 Fix For: 0.94.18

 Attachments: HBASE-9914-0.94.17-v01.patch


 According to this thread: http://search-hadoop.com/m/3CzC31BQsDd , 
 TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking 
 sometimes failed.
 This issue is to port the fix from HBASE-9836 to 0.94



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-9914) Port fix for HBASE-9836 'Intermittent TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking failure' to 0.94

2014-02-26 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9914:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the patch, Takeshi.

 Port fix for HBASE-9836 'Intermittent 
 TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking 
 failure' to 0.94
 -

 Key: HBASE-9914
 URL: https://issues.apache.org/jira/browse/HBASE-9914
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: takeshi.miao
  Labels: noob
 Fix For: 0.94.18

 Attachments: HBASE-9914-0.94.17-v01.patch


 According to this thread: http://search-hadoop.com/m/3CzC31BQsDd , 
 TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking 
 sometimes failed.
 This issue is to port the fix from HBASE-9836 to 0.94



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10599) Replace System.currentMillis() with EnvironmentEdge.currentTimeMillis in memstore flusher and related places

2014-02-26 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10599:
---

Attachment: HBASE-10599.patch

 Replace System.currentMillis() with EnvironmentEdge.currentTimeMillis in 
 memstore flusher and related places
 

 Key: HBASE-10599
 URL: https://issues.apache.org/jira/browse/HBASE-10599
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: HBASE-10599.patch


 MemStoreFlusher still uses System.currentTimeMillis().  Better to replace it with 
 EnvironmentEdgeManager.currentTimeMillis().
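A minimal sketch of the substitution and of why it helps tests (class and method names are from the 0.94/0.96-era org.apache.hadoop.hbase.util package; treat the exact names as assumptions):
{code}
import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
import org.apache.hadoop.hbase.util.ManualEnvironmentEdge;

public class EdgeExample {
  public static void main(String[] args) {
    // Instead of System.currentTimeMillis() ...
    long now = EnvironmentEdgeManager.currentTimeMillis();

    // ... so that tests can inject a controllable clock:
    ManualEnvironmentEdge edge = new ManualEnvironmentEdge();
    edge.setValue(now);
    EnvironmentEdgeManager.injectEdge(edge);
    edge.incValue(5000);   // pretend 5 seconds passed without sleeping
    System.out.println(EnvironmentEdgeManager.currentTimeMillis() - now);  // 5000
    EnvironmentEdgeManager.reset();
  }
}
{code}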



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10599) Replace System.currentMillis() with EnvironmentEdge.currentTimeMillis in memstore flusher and related places

2014-02-26 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10599:
---

Status: Patch Available  (was: Open)

 Replace System.currentMillis() with EnvironmentEdge.currentTimeMillis in 
 memstore flusher and related places
 

 Key: HBASE-10599
 URL: https://issues.apache.org/jira/browse/HBASE-10599
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.96.1.1, 0.98.0, 0.99.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: HBASE-10599.patch


 Memstoreflusher still uses System.currentMillis.  Better to replace it with 
 EnvironmentEdge.currentMillis(),



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10531) Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo

2014-02-26 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913181#comment-13913181
 ] 

ramkrishna.s.vasudevan commented on HBASE-10531:


Will come up with a WIP patch tomorrow.  There are some test case failures 
when using reseek with MetaComparator.

 Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo
 

 Key: HBASE-10531
 URL: https://issues.apache.org/jira/browse/HBASE-10531
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0

 Attachments: HBASE-10531.patch


 Currently the byte[] key passed to HFileScanner.seekTo and 
 HFileScanner.reseekTo is a combination of row, cf, qual, type and ts.  The 
 caller forms this by using kv.getBuffer, which is actually deprecated.  
 So we need to see how this can be achieved once kv.getBuffer is removed.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-9990) HTable uses the conf for each newCaller

2014-02-26 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9990:
---

Attachment: 9990.v2.patch

 HTable uses the conf for each newCaller
 -

 Key: HBASE-9990
 URL: https://issues.apache.org/jira/browse/HBASE-9990
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Attachments: 9990.v1.patch, 9990.v2.patch


 You can construct a RpcRetryingCallerFactory, but actually the conf is read 
 for each caller creation. Reading the conf is obviously expensive, and a 
 profiling session shows it. If we want to send hundreds of thousands of 
 queries per second, we should not do that.
 RpcRetryingCallerFactory.newCaller is called for each get, for example.
 This is not a regression, we have something similar in 0.94.
 On the 0.96, we see the creation of: java.util.regex.Matcher: 15739712b after 
 a few thousand calls to get.
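A minimal sketch of the direction described above, reading the configuration once when the factory is built rather than on every newCaller(); this is a simplified stand-in, not the actual RpcRetryingCallerFactory code:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HConstants;

public class CachedCallerFactorySketch {
  /** Stand-in for the caller object; the real one runs the retry loop. */
  public static class Caller {
    final long pause;
    final int retries;
    Caller(long pause, int retries) { this.pause = pause; this.retries = retries; }
  }

  private final long pause;
  private final int retries;

  // Parse the Configuration exactly once, at factory construction time.
  public CachedCallerFactorySketch(Configuration conf) {
    this.pause = conf.getLong(HConstants.HBASE_CLIENT_PAUSE,
        HConstants.DEFAULT_HBASE_CLIENT_PAUSE);
    this.retries = conf.getInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER,
        HConstants.DEFAULT_HBASE_CLIENT_RETRIES_NUMBER);
  }

  // newCaller() becomes allocation only: no conf lookups (and no regex matching) per Get.
  public Caller newCaller() {
    return new Caller(pause, retries);
  }
}
{code}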



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-9990) HTable uses the conf for each newCaller

2014-02-26 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9990:
---

Status: Patch Available  (was: Open)

 HTable uses the conf for each newCaller
 -

 Key: HBASE-9990
 URL: https://issues.apache.org/jira/browse/HBASE-9990
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.96.0, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Attachments: 9990.v1.patch, 9990.v2.patch


 You can construct a RpcRetryingCallerFactory, but actually the conf is read 
 for each caller creation. Reading the conf is obviously expensive, and a 
 profiling session shows it. If we want to send hundreds of thousands of 
 queries per second, we should not do that.
 RpcRetryingCallerFactory.newCaller is called for each get, for example.
 This is not a regression, we have something similar in 0.94.
 On the 0.96, we see the creation of: java.util.regex.Matcher: 15739712b after 
 a few thousand calls to get.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-9990) HTable uses the conf for each newCaller

2014-02-26 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9990:
---

Fix Version/s: 0.99.0

 HTable uses the conf for each newCaller
 -

 Key: HBASE-9990
 URL: https://issues.apache.org/jira/browse/HBASE-9990
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 9990.v1.patch, 9990.v2.patch


 You can construct a RpcRetryingCallerFactory, but actually the conf is read 
 for each caller creation. Reading the conf is obviously expensive, and a 
 profiling session shows it. If we want to send hundreds of thousands of 
 queries per second, we should not do that.
 RpcRetryingCallerFactory.newCaller is called for each get, for example.
 This is not a regression, we have something similar in 0.94.
 On the 0.96, we see the creation of: java.util.regex.Matcher: 15739712b after 
 a few thousand calls to get.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9990) HTable uses the conf for each newCaller

2014-02-26 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913188#comment-13913188
 ] 

Nicolas Liochon commented on HBASE-9990:


v2: much simpler.

 HTable uses the conf for each newCaller
 -

 Key: HBASE-9990
 URL: https://issues.apache.org/jira/browse/HBASE-9990
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 9990.v1.patch, 9990.v2.patch


 You can construct a RpcRetryingCallerFactory, but actually the conf is read 
 for each caller creation. Reading the conf is obviously expensive, and a 
 profiling session shows it. If we want to send hundreds of thousands of 
 queries per second, we should not do that.
 RpcRetryingCallerFactory.newCaller is called for each get, for example.
 This is not a regression, we have something similar in 0.94.
 On the 0.96, we see the creation of: java.util.regex.Matcher: 15739712b after 
 a few thousand calls to get.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10612) Remove unnecessary dependency on org.eclipse.jdt:core

2014-02-26 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10612:
---

Attachment: 10612-v2.txt

How about patch v2?

 Remove unnecessary dependency on org.eclipse.jdt:core
 -

 Key: HBASE-10612
 URL: https://issues.apache.org/jira/browse/HBASE-10612
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.99.0

 Attachments: 10612-v1.txt, 10612-v2.txt


 Transitive dependency on org.eclipse.jdt:core comes from 
 org.mortbay.jetty:jsp-2.1
 This dependency is not needed.
 Removing it would get rid of the following jar under lib:
 {code}
 -rw-r--r-- 1 root root 3566844 Feb 20 20:50 core-3.1.1.jar
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10080) Unnecessary call to locateRegion when creating an HTable instance

2014-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913221#comment-13913221
 ] 

Hadoop QA commented on HBASE-10080:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12631222/10080.v2.patch
  against trunk revision .
  ATTACHMENT ID: 12631222

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestShell

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8815//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8815//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8815//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8815//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8815//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8815//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8815//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8815//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8815//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8815//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8815//console

This message is automatically generated.

 Unnecessary call to locateRegion when creating an HTable instance
 -

 Key: HBASE-10080
 URL: https://issues.apache.org/jira/browse/HBASE-10080
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 0.98.1

 Attachments: 10080.v1.patch, 10080.v2.patch


 It's more or less in contradiction with the objective of having lightweight 
 HTable objects, and the data may be stale by the time we use it. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10592) Refactor PerformanceEvaluation tool

2014-02-26 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913220#comment-13913220
 ] 

Nick Dimiduk commented on HBASE-10592:
--

Per my testing, it's ready. I was just about to commit. Want to take it for a 
spin before I do so, [~jmspaggi]?

 Refactor PerformanceEvaluation tool
 ---

 Key: HBASE-10592
 URL: https://issues.apache.org/jira/browse/HBASE-10592
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 0.96.2, 0.98.1, 0.99.0
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-10592.00-0.96.patch, HBASE-10592.00-0.98.patch, 
 HBASE-10592.00.patch, HBASE-10592.01-0.96.patch, HBASE-10592.01-0.98.patch, 
 HBASE-10592.01.patch


 PerfEval is kind of a mess. It's painful to add new features because the test 
 options are itemized and passed as parameters to internal methods. 
 Serialization is hand-rolled and tedious. Ensuring support for mapreduce mode 
 is a chore because of it.
 This patch refactors the tool. Options are now passed around to methods and 
 such as a POJO instead of one-by-one. Get rid of accessors that don't help 
 anyone. On the mapreduce side, serialization is now handled using json 
 (jackson is a dependency anyway) instead of the hand-rolled regex we used 
 before. Also do away with custom InputSplit and FileFormat, instead using 
 Text and NLineInputFormat. On the local mode side, combine 1 client and N 
 clients into the same implementation. That implementation now uses an 
 ExecutorService, so we can later decouple number of client workers from 
 number of client tasks. Finally, drop a bunch of confusing local state, 
 instead use the new TestOptions POJO as a parameter to static methods.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10592) Refactor PerformanceEvaluation tool

2014-02-26 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913227#comment-13913227
 ] 

Jean-Marc Spaggiari commented on HBASE-10592:
-

Will test it in 0.96 right now.

 Refactor PerformanceEvaluation tool
 ---

 Key: HBASE-10592
 URL: https://issues.apache.org/jira/browse/HBASE-10592
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 0.96.2, 0.98.1, 0.99.0
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-10592.00-0.96.patch, HBASE-10592.00-0.98.patch, 
 HBASE-10592.00.patch, HBASE-10592.01-0.96.patch, HBASE-10592.01-0.98.patch, 
 HBASE-10592.01.patch


 PerfEval is kind of a mess. It's painful to add new features because the test 
 options are itemized and passed as parameters to internal methods. 
 Serialization is hand-rolled and tedious. Ensuring support for mapreduce mode 
 is a chore because of it.
 This patch refactors the tool. Options are now passed around to methods and 
 such as a POJO instead of one-by-one. Get rid of accessors that don't help 
 anyone. On the mapreduce side, serialization is now handled using json 
 (jackson is a dependency anyway) instead of the hand-rolled regex we used 
 before. Also do away with custom InputSplit and FileFormat, instead using 
 Text and NLineInputFormat. On the local mode side, combine 1 client and N 
 clients into the same implementation. That implementation now uses an 
 ExecutorService, so we can later decouple number of client workers from 
 number of client tasks. Finally, drop a bunch of confusing local state, 
 instead use the new TestOptions POJO as a parameter to static methods.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10351) LoadBalancer changes for supporting region replicas

2014-02-26 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913229#comment-13913229
 ] 

Sergey Shelukhin commented on HBASE-10351:
--

some minor comments (in response to old comments and on v4). Mostly looks good

 LoadBalancer changes for supporting region replicas
 ---

 Key: HBASE-10351
 URL: https://issues.apache.org/jira/browse/HBASE-10351
 Project: HBase
  Issue Type: Sub-task
  Components: master
Affects Versions: 0.99.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Attachments: hbase-10351_v0.patch, hbase-10351_v1.patch, 
 hbase-10351_v3.patch, hbase-10351_v5.patch


 LoadBalancer has to be aware of and enforce placement of region replicas so 
 that the replicas are not co-hosted in the same server, host or rack. This 
 will ensure that the region is highly available during process / host / rack 
 failover. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10617) Value lost if $ element is before column element in json when posted to Rest Server

2014-02-26 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913236#comment-13913236
 ] 

Nick Dimiduk commented on HBASE-10617:
--

IIRC, this was fixed on 0.96 by switching serialization libraries.

 Value lost if $ element is before column element in json when posted to 
 Rest Server
 ---

 Key: HBASE-10617
 URL: https://issues.apache.org/jira/browse/HBASE-10617
 Project: HBase
  Issue Type: Bug
  Components: REST
Affects Versions: 0.94.11
Reporter: Liu Shaohui
Priority: Minor

 When the following json data is posted to the rest server, it returns 200, but 
 the value is null in HBase
 {code}
 {Row: { key:cjI=, Cell: {$:ZGF0YTE=, column:ZjE6YzI=}}}
 {code}
 From the rest server log, we found the value length is 0 after the server 
 parses the json into the RowModel object
 {code}
 14/02/26 17:52:14 DEBUG rest.RowResource: PUT 
 {totalColumns:1,families:{f1:[{timestamp:9223372036854775807,qualifier:c2,vlen:0}]},row:r2}
 {code}
 When the order puts column before $, it works fine.
 {code}
 {Row: { key:cjI=, Cell: {column:ZjE6YzI=, $:ZGF0YTE= }}}
 {code}
 Different json libs may order these two elements differently even if 
 column is put before $.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9836) Intermittent TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking failure

2014-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913247#comment-13913247
 ] 

Hudson commented on HBASE-9836:
---

FAILURE: Integrated in HBase-0.94-security #423 (See 
[https://builds.apache.org/job/HBase-0.94-security/423/])
HBASE-9914 Port fix for HBASE-9836 'Intermittent 
TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking 
failure' to 0.94 (Takeshi Miao) (tedyu: rev 1572166)
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverScannerOpenHook.java


 Intermittent 
 TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking 
 failure
 ---

 Key: HBASE-9836
 URL: https://issues.apache.org/jira/browse/HBASE-9836
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.98.0, 0.96.1

 Attachments: 9836-v1.txt, 9836-v3.txt, 9836-v4.txt, 9836-v5.txt, 
 9836-v6.txt


 Here were two recent examples:
 https://builds.apache.org/job/hbase-0.96-hadoop2/99/testReport/org.apache.hadoop.hbase.coprocessor/TestRegionObserverScannerOpenHook/testRegionObserverCompactionTimeStacking/
 https://builds.apache.org/job/PreCommit-HBASE-Build/7616/testReport/junit/org.apache.hadoop.hbase.coprocessor/TestRegionObserverScannerOpenHook/testRegionObserverCompactionTimeStacking/
 From the second:
 {code}
 2013-10-24 18:08:10,080 INFO  [Priority.RpcServer.handler=1,port=58174] 
 regionserver.HRegionServer(3672): Flushing 
 testRegionObserverCompactionTimeStacking,,1382638088230.e96920e43ea374ba1bd559df115870cf.
 ...
 2013-10-24 18:08:10,544 INFO  [Priority.RpcServer.handler=1,port=58174] 
 regionserver.HRegion(1645): Finished memstore flush of ~128.0/128, 
 currentsize=0.0/0 for region 
 testRegionObserverCompactionTimeStacking,,1382638088230.e96920e43ea374ba1bd559df115870cf.
  in 464ms, sequenceid=5, compaction requested=true
 2013-10-24 18:08:10,546 DEBUG [Priority.RpcServer.handler=1,port=58174] 
 regionserver.CompactSplitThread(319): Small Compaction requested: system; 
 Because: Compaction through user triggered flush; compaction_queue=(0:0), 
 split_queue=0, merge_queue=0
 2013-10-24 18:08:10,547 DEBUG 
 [RS:0;asf002:58174-smallCompactions-1382638090545] 
 compactions.RatioBasedCompactionPolicy(92): Selecting compaction from 2 store 
 files, 0 compacting, 2 eligible, 10 blocking
 2013-10-24 18:08:10,547 DEBUG [pool-1-thread-1] catalog.CatalogTracker(209): 
 Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@4be179
 2013-10-24 18:08:10,549 DEBUG 
 [RS:0;asf002:58174-smallCompactions-1382638090545] 
 compactions.ExploringCompactionPolicy(112): Exploring compaction algorithm 
 has selected 2 files of size 1999 starting at candidate #0 after considering 
 1 permutations with 1 in ratio
 2013-10-24 18:08:10,551 DEBUG 
 [RS:0;asf002:58174-smallCompactions-1382638090545] regionserver.HStore(1329): 
 e96920e43ea374ba1bd559df115870cf - A: Initiating major compaction
 2013-10-24 18:08:10,551 INFO  
 [RS:0;asf002:58174-smallCompactions-1382638090545] 
 regionserver.HRegion(1294): Starting compaction on A in region 
 testRegionObserverCompactionTimeStacking,,1382638088230.e96920e43ea374ba1bd559df115870cf.
 2013-10-24 18:08:10,551 INFO  
 [RS:0;asf002:58174-smallCompactions-1382638090545] regionserver.HStore(982): 
 Starting compaction of 2 file(s) in A of 
 testRegionObserverCompactionTimeStacking,,1382638088230.e96920e43ea374ba1bd559df115870cf.
  into 
 tmpdir=hdfs://localhost:49506/user/jenkins/hbase/data/default/testRegionObserverCompactionTimeStacking/e96920e43ea374ba1bd559df115870cf/.tmp,
  totalSize=2.0k
 2013-10-24 18:08:10,552 DEBUG 
 [RS:0;asf002:58174-smallCompactions-1382638090545] 
 compactions.Compactor(168): Compacting 
 hdfs://localhost:49506/user/jenkins/hbase/data/default/testRegionObserverCompactionTimeStacking/e96920e43ea374ba1bd559df115870cf/A/44f87b94732149c08f20bdba00dd7140,
  keycount=1, bloomtype=ROW, size=992.0, encoding=NONE, seqNum=3, 
 earliestPutTs=1382638089528
 2013-10-24 18:08:10,552 DEBUG 
 [RS:0;asf002:58174-smallCompactions-1382638090545] 
 compactions.Compactor(168): Compacting 
 hdfs://localhost:49506/user/jenkins/hbase/data/default/testRegionObserverCompactionTimeStacking/e96920e43ea374ba1bd559df115870cf/A/0b2e580cbda246718bbf64c21e81cd18,
  keycount=1, bloomtype=ROW, size=1007.0, encoding=NONE, seqNum=5, 
 earliestPutTs=1382638090053
 2013-10-24 18:08:10,564 DEBUG 
 [RS:0;asf002:58174-smallCompactions-1382638090545] util.FSUtils(305): DFS 
 Client does not support most favored nodes create; using default create
 ...
 Potentially hanging thread: RS:0;asf002:58174-smallCompactions-1382638090545
   java.lang.Object.wait(Native Method)
   java.lang.Object.wait(Object.java:485)
  

[jira] [Commented] (HBASE-9914) Port fix for HBASE-9836 'Intermittent TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking failure' to 0.94

2014-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913248#comment-13913248
 ] 

Hudson commented on HBASE-9914:
---

FAILURE: Integrated in HBase-0.94-security #423 (See 
[https://builds.apache.org/job/HBase-0.94-security/423/])
HBASE-9914 Port fix for HBASE-9836 'Intermittent 
TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking 
failure' to 0.94 (Takeshi Miao) (tedyu: rev 1572166)
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverScannerOpenHook.java


 Port fix for HBASE-9836 'Intermittent 
 TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking 
 failure' to 0.94
 -

 Key: HBASE-9914
 URL: https://issues.apache.org/jira/browse/HBASE-9914
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: takeshi.miao
  Labels: noob
 Fix For: 0.94.18

 Attachments: HBASE-9914-0.94.17-v01.patch


 According to this thread: http://search-hadoop.com/m/3CzC31BQsDd , 
 TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking 
 sometimes failed.
 This issue is to port the fix from HBASE-9836 to 0.94



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10451) Enable back Tag compression on HFiles

2014-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913256#comment-13913256
 ] 

Hadoop QA commented on HBASE-10451:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12631245/HBASE-10451_V6.patch
  against trunk revision .
  ATTACHMENT ID: 12631245

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestHCM

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8816//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8816//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8816//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8816//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8816//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8816//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8816//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8816//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8816//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8816//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8816//console

This message is automatically generated.

 Enable back Tag compression on HFiles
 -

 Key: HBASE-10451
 URL: https://issues.apache.org/jira/browse/HBASE-10451
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.98.1, 0.99.0

 Attachments: HBASE-10451.patch, HBASE-10451_V2.patch, 
 HBASE-10451_V3.patch, HBASE-10451_V4.patch, HBASE-10451_V5.patch, 
 HBASE-10451_V6.patch


 HBASE-10443 disables tag compression on HFiles. This Jira is to fix the 
 issues we have found out in HBASE-10443 and enable it back.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-8894) Forward port compressed l2 cache from 0.89fb

2014-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913265#comment-13913265
 ] 

Hadoop QA commented on HBASE-8894:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12607956/HBASE-8894-0.94-v2.txt
  against trunk revision .
  ATTACHMENT ID: 12607956

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 30 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8818//console

This message is automatically generated.

 Forward port compressed l2 cache from 0.89fb
 

 Key: HBASE-8894
 URL: https://issues.apache.org/jira/browse/HBASE-8894
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Liang Xie
Priority: Critical
 Attachments: HBASE-8894-0.94-v1.txt, HBASE-8894-0.94-v2.txt


 Forward port Alex's improvement on hbase-7407 from 0.89-fb branch:
 {code}
   1 r1492797 | liyin | 2013-06-13 11:18:20 -0700 (Thu, 13 Jun 2013) | 43 lines
   2
   3 [master] Implements a secondary compressed cache (L2 cache)
   4
   5 Author: avf
   6
   7 Summary:
   8 This revision implements compressed and encoded second-level cache with 
 off-heap
   9 (and optionally on-heap) storage and a bucket-allocator based on 
 HBASE-7404.
  10
  11 BucketCache from HBASE-7404 is extensively modified to:
  12
  13 * Only handle byte arrays (i.e., no more serialization/deserialization 
 within)
  14 * Remove persistence support for the time being
  15 * Keep an  index of hfilename to blocks for efficient eviction on close
  16
  17 A new interface (L2Cache) is introduced in order to separate it from the 
 current
  18 implementation. The L2 cache is then integrated into the classes that 
 handle
  19 reading from and writing to HFiles to allow cache-on-write as well as
  20 cache-on-read. Metrics for the L2 cache are integrated into 
 RegionServerMetrics
  21 much in the same fashion as metrics for the existing (L2) BlockCache.
  22
  23 Additionally, CacheConfig class is re-refactored to configure the L2 
 cache,
  24 replace multile constructors with a Builder, as well as replace static 
 methods
  25 for instantiating the caches with abstract factories (with singleton
  26 implementations for both the existing LruBlockCache and the newly 
 introduced
  27 BucketCache based L2 cache)
  28
  29 Test Plan:
  30 1) Additional unit tests
  31 2) Stress test on a single devserver
  32 3) Test on a single-node in shadow cluster
  33 4) Test on a whole shadow cluster
  34
  35 Revert Plan:
  36
  37 Reviewers: liyintang, aaiyer, rshroff, manukranthk, adela
  38
  39 Reviewed By: liyintang
  40
  41 CC: gqchen, hbase-eng@
  42
  43 Differential Revision: https://phabricator.fb.com/D837264
  44
  45 Task ID: 2325295
  7 
   6 r1492340 | liyin | 2013-06-12 11:36:03 -0700 (Wed, 12 Jun 2013) | 21 lines
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10018) Change the location prefetch

2014-02-26 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10018:


Status: Open  (was: Patch Available)

 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 0.98.1, 0.99.0

 Attachments: 10018.v1.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
  



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9836) Intermittent TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking failure

2014-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913290#comment-13913290
 ] 

Hudson commented on HBASE-9836:
---

FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #33 (See 
[https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/33/])
HBASE-9914 Port fix for HBASE-9836 'Intermittent 
TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking 
failure' to 0.94 (Takeshi Miao) (tedyu: rev 1572166)
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverScannerOpenHook.java


 Intermittent 
 TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking 
 failure
 ---

 Key: HBASE-9836
 URL: https://issues.apache.org/jira/browse/HBASE-9836
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.98.0, 0.96.1

 Attachments: 9836-v1.txt, 9836-v3.txt, 9836-v4.txt, 9836-v5.txt, 
 9836-v6.txt


 Here were two recent examples:
 https://builds.apache.org/job/hbase-0.96-hadoop2/99/testReport/org.apache.hadoop.hbase.coprocessor/TestRegionObserverScannerOpenHook/testRegionObserverCompactionTimeStacking/
 https://builds.apache.org/job/PreCommit-HBASE-Build/7616/testReport/junit/org.apache.hadoop.hbase.coprocessor/TestRegionObserverScannerOpenHook/testRegionObserverCompactionTimeStacking/
 From the second:
 {code}
 2013-10-24 18:08:10,080 INFO  [Priority.RpcServer.handler=1,port=58174] 
 regionserver.HRegionServer(3672): Flushing 
 testRegionObserverCompactionTimeStacking,,1382638088230.e96920e43ea374ba1bd559df115870cf.
 ...
 2013-10-24 18:08:10,544 INFO  [Priority.RpcServer.handler=1,port=58174] 
 regionserver.HRegion(1645): Finished memstore flush of ~128.0/128, 
 currentsize=0.0/0 for region 
 testRegionObserverCompactionTimeStacking,,1382638088230.e96920e43ea374ba1bd559df115870cf.
  in 464ms, sequenceid=5, compaction requested=true
 2013-10-24 18:08:10,546 DEBUG [Priority.RpcServer.handler=1,port=58174] 
 regionserver.CompactSplitThread(319): Small Compaction requested: system; 
 Because: Compaction through user triggered flush; compaction_queue=(0:0), 
 split_queue=0, merge_queue=0
 2013-10-24 18:08:10,547 DEBUG 
 [RS:0;asf002:58174-smallCompactions-1382638090545] 
 compactions.RatioBasedCompactionPolicy(92): Selecting compaction from 2 store 
 files, 0 compacting, 2 eligible, 10 blocking
 2013-10-24 18:08:10,547 DEBUG [pool-1-thread-1] catalog.CatalogTracker(209): 
 Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@4be179
 2013-10-24 18:08:10,549 DEBUG 
 [RS:0;asf002:58174-smallCompactions-1382638090545] 
 compactions.ExploringCompactionPolicy(112): Exploring compaction algorithm 
 has selected 2 files of size 1999 starting at candidate #0 after considering 
 1 permutations with 1 in ratio
 2013-10-24 18:08:10,551 DEBUG 
 [RS:0;asf002:58174-smallCompactions-1382638090545] regionserver.HStore(1329): 
 e96920e43ea374ba1bd559df115870cf - A: Initiating major compaction
 2013-10-24 18:08:10,551 INFO  
 [RS:0;asf002:58174-smallCompactions-1382638090545] 
 regionserver.HRegion(1294): Starting compaction on A in region 
 testRegionObserverCompactionTimeStacking,,1382638088230.e96920e43ea374ba1bd559df115870cf.
 2013-10-24 18:08:10,551 INFO  
 [RS:0;asf002:58174-smallCompactions-1382638090545] regionserver.HStore(982): 
 Starting compaction of 2 file(s) in A of 
 testRegionObserverCompactionTimeStacking,,1382638088230.e96920e43ea374ba1bd559df115870cf.
  into 
 tmpdir=hdfs://localhost:49506/user/jenkins/hbase/data/default/testRegionObserverCompactionTimeStacking/e96920e43ea374ba1bd559df115870cf/.tmp,
  totalSize=2.0k
 2013-10-24 18:08:10,552 DEBUG 
 [RS:0;asf002:58174-smallCompactions-1382638090545] 
 compactions.Compactor(168): Compacting 
 hdfs://localhost:49506/user/jenkins/hbase/data/default/testRegionObserverCompactionTimeStacking/e96920e43ea374ba1bd559df115870cf/A/44f87b94732149c08f20bdba00dd7140,
  keycount=1, bloomtype=ROW, size=992.0, encoding=NONE, seqNum=3, 
 earliestPutTs=1382638089528
 2013-10-24 18:08:10,552 DEBUG 
 [RS:0;asf002:58174-smallCompactions-1382638090545] 
 compactions.Compactor(168): Compacting 
 hdfs://localhost:49506/user/jenkins/hbase/data/default/testRegionObserverCompactionTimeStacking/e96920e43ea374ba1bd559df115870cf/A/0b2e580cbda246718bbf64c21e81cd18,
  keycount=1, bloomtype=ROW, size=1007.0, encoding=NONE, seqNum=5, 
 earliestPutTs=1382638090053
 2013-10-24 18:08:10,564 DEBUG 
 [RS:0;asf002:58174-smallCompactions-1382638090545] util.FSUtils(305): DFS 
 Client does not support most favored nodes create; using default create
 ...
 Potentially hanging thread: RS:0;asf002:58174-smallCompactions-1382638090545
   java.lang.Object.wait(Native Method)
   

[jira] [Commented] (HBASE-10018) Change the location prefetch

2014-02-26 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913292#comment-13913292
 ] 

Nicolas Liochon commented on HBASE-10018:
-

v2: creates the scanner directly instead of going through an HTable creation, 
saving some configuration parsing.
However, it seems that the reversed scan doesn't manage the 'small' status. If 
that's the case, it will be worth fixing here.
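A minimal sketch of the single reversed-scan lookup idea (0.96/0.98-era client classes; the construction of the meta start row and the 'small' handling are simplified assumptions, not the patch):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class ReverseMetaLookupSketch {
  /** Returns the hbase:meta row owning the given key, or null; metaStartRow is assumed pre-built. */
  static Result locate(Configuration conf, byte[] metaStartRow) throws Exception {
    Scan scan = new Scan(metaStartRow);
    scan.setReversed(true);   // walk backwards: the first row <= the key is the owning region
    scan.setSmall(true);      // the 'small' optimization the comment above refers to
    scan.setCaching(1);
    HTable meta = new HTable(conf, TableName.META_TABLE_NAME);
    try {
      ResultScanner scanner = meta.getScanner(scan);
      try {
        return scanner.next();   // one round trip instead of a get plus a 10-region prefetch
      } finally {
        scanner.close();
      }
    } finally {
      meta.close();
    }
  }
}
{code}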

 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 0.98.1, 0.99.0

 Attachments: 10018.v1.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
  



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9914) Port fix for HBASE-9836 'Intermittent TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking failure' to 0.94

2014-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913291#comment-13913291
 ] 

Hudson commented on HBASE-9914:
---

FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #33 (See 
[https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/33/])
HBASE-9914 Port fix for HBASE-9836 'Intermittent 
TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking 
failure' to 0.94 (Takeshi Miao) (tedyu: rev 1572166)
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverScannerOpenHook.java


 Port fix for HBASE-9836 'Intermittent 
 TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking 
 failure' to 0.94
 -

 Key: HBASE-9914
 URL: https://issues.apache.org/jira/browse/HBASE-9914
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: takeshi.miao
  Labels: noob
 Fix For: 0.94.18

 Attachments: HBASE-9914-0.94.17-v01.patch


 According to this thread: http://search-hadoop.com/m/3CzC31BQsDd , 
 TestRegionObserverScannerOpenHook#testRegionObserverCompactionTimeStacking 
 sometimes failed.
 This issue is to port the fix from HBASE-9836 to 0.94



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10018) Change the location prefetch

2014-02-26 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10018:


Status: Patch Available  (was: Open)

 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 0.98.1, 0.99.0

 Attachments: 10018.v1.patch, 10018.v2.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
  



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10018) Change the location prefetch

2014-02-26 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10018:


Attachment: 10018.v2.patch

 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 0.98.1, 0.99.0

 Attachments: 10018.v1.patch, 10018.v2.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
  



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10080) Unnecessary call to locateRegion when creating an HTable instance

2014-02-26 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913305#comment-13913305
 ] 

Nicolas Liochon commented on HBASE-10080:
-

real issue:
{code}
test_Hbase::Table_constructor_should_fail_for_non-existent_tables(Hbase::TableConstructorTest)
[./src/test/ruby/hbase/table_test.rb:33:in 
`test_Hbase::Table_constructor_should_fail_for_non-existent_tables'
 org/jruby/RubyProc.java:270:in `call'
 org/jruby/RubyKernel.java:2105:in `send'
 org/jruby/RubyArray.java:1620:in `each'
 org/jruby/RubyArray.java:1620:in `each']:
{code}
Will fix.

 Unnecessary call to locateRegion when creating an HTable instance
 -

 Key: HBASE-10080
 URL: https://issues.apache.org/jira/browse/HBASE-10080
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 0.98.1

 Attachments: 10080.v1.patch, 10080.v2.patch


 It's more or less in contradiction with the objective of having lightweight 
 HTable objects, and the data may be stale by the time we use it. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10080) Unnecessary call to locateRegion when creating an HTable instance

2014-02-26 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913318#comment-13913318
 ] 

Nicolas Liochon commented on HBASE-10080:
-

v3. Simple fix: remove the test. There is one on the java side already.

 Unnecessary call to locateRegion when creating an HTable instance
 -

 Key: HBASE-10080
 URL: https://issues.apache.org/jira/browse/HBASE-10080
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 0.98.1

 Attachments: 10080.v1.patch, 10080.v2.patch, 10080.v3.patch


 It's more or less in contradiction with the objective of having lightweight 
 HTable objects, and the data may be stale by the time we use it. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10080) Unnecessary call to locateRegion when creating an HTable instance

2014-02-26 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10080:


Attachment: 10080.v3.patch

 Unnecessary call to locateRegion when creating an HTable instance
 -

 Key: HBASE-10080
 URL: https://issues.apache.org/jira/browse/HBASE-10080
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 0.98.1

 Attachments: 10080.v1.patch, 10080.v2.patch, 10080.v3.patch


 It's more or less in contradiction with the objective of having lightweight 
 HTable objects, and the data may be stale by the time we use it. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10080) Unnecessary call to locateRegion when creating an HTable instance

2014-02-26 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10080:


Status: Patch Available  (was: Open)

 Unnecessary call to locateRegion when creating an HTable instance
 -

 Key: HBASE-10080
 URL: https://issues.apache.org/jira/browse/HBASE-10080
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.96.0, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 0.98.1

 Attachments: 10080.v1.patch, 10080.v2.patch, 10080.v3.patch


 It's more or less in contradiction with the objective of having lightweight 
 HTable objects, and the data may be stale by the time we use it. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10080) Unnecessary call to locateRegion when creating an HTable instance

2014-02-26 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10080:


Status: Open  (was: Patch Available)

 Unnecessary call to locateRegion when creating an HTable instance
 -

 Key: HBASE-10080
 URL: https://issues.apache.org/jira/browse/HBASE-10080
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.96.0, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 0.98.1

 Attachments: 10080.v1.patch, 10080.v2.patch, 10080.v3.patch


 It's more or less in contradiction with the objective of having lightweight 
 HTable objects, and the data may be stale by the time we use it. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10077) Per family WAL encryption

2014-02-26 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913322#comment-13913322
 ] 

Andrew Purtell commented on HBASE-10077:


bq. So one Mutation write will involve write to 2 WALs.  

Maybe. If the edit spans families and the families are partitioned to different 
WALs. 

bq. Atomicity for 2 WAL writes (?)

Most likely not.

The thing about security is, as soon as you need it, you must accept something 
else that sucks :-), be it a performance drop, or this. 


 Per family WAL encryption
 -

 Key: HBASE-10077
 URL: https://issues.apache.org/jira/browse/HBASE-10077
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.98.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell

 HBASE-7544 introduces WAL encryption to prevent the leakage of protected data 
 to disk by way of WAL files. However it is currently enabled globally for the 
 regionserver. Encryption of WAL entries should depend on whether or not an 
 entry in the WAL is to be stored within an encrypted column family.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10591) Sanity check table configuration in createTable

2014-02-26 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913321#comment-13913321
 ] 

Enis Soztutar commented on HBASE-10591:
---

bq. Is this intentional?
Yes it is used by tests. But we should have a setter as well. Shell sets the 
SPLIT_POLICY by calling setValue() directly, which is discouraged.  We already 
have get(). 

 Sanity check table configuration in createTable
 ---

 Key: HBASE-10591
 URL: https://issues.apache.org/jira/browse/HBASE-10591
 Project: HBase
  Issue Type: Improvement
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.99.0

 Attachments: hbase-10591_v1.patch, hbase-10591_v2.patch, 
 hbase-10591_v3.patch, hbase-10591_v4.patch, hbase-10591_v5.patch


 We had a cluster become completely inoperable because a couple of tables were 
 erroneously created with MAX_FILESIZE set to 4K, which resulted in 180K 
 regions in a short interval and brought the master down due to HBASE-4246.
 We can do some sanity checking in master.createTable() and reject such 
 requests. We already check the compression there, so it seems a good place. 
 Alter table should check for this as well.
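A minimal sketch of the kind of check described (the config key and threshold are hypothetical; the real patch may choose different names and limits):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.DoNotRetryIOException;
import org.apache.hadoop.hbase.HTableDescriptor;

public class TableSanityCheckSketch {
  static void sanityCheckMaxFileSize(Configuration conf, HTableDescriptor htd)
      throws DoNotRetryIOException {
    // Hypothetical knob: the smallest MAX_FILESIZE createTable/alterTable will accept.
    long floor = conf.getLong("hbase.table.sanity.min.max.filesize", 2L * 1024 * 1024);
    long maxFileSize = htd.getMaxFileSize();   // -1 when unset, i.e. use the cluster default
    if (maxFileSize >= 0 && maxFileSize < floor) {
      throw new DoNotRetryIOException("MAX_FILESIZE " + maxFileSize
          + " for table " + htd.getTableName() + " is below the sanity floor " + floor
          + "; this would split the table into an enormous number of regions");
    }
  }
}
{code}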



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10608) Acquire the FS Delegation Token for Secure ExportSnapshot

2014-02-26 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913328#comment-13913328
 ] 

Andrew Purtell commented on HBASE-10608:


bq. In 
/hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/FsDelegationToken.java
 the package is declared as package org.apache.hadoop.hbase.security;

Whoops. Good spotting. 

Two things:
- This issue should have been resolved after the commits.
- Well, anyway, now it looks like it's reopened and needs an addendum. 

 Acquire the FS Delegation Token for Secure ExportSnapshot
 -

 Key: HBASE-10608
 URL: https://issues.apache.org/jira/browse/HBASE-10608
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.98.0, 0.94.16, 0.96.1.1
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: HBASE-10608-v0.patch


 Export Snapshot is missing the delegation token acquisition for working with 
 remote secure clusters



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HBASE-10617) Value lost if $ element is before column element in json when posted to Rest Server

2014-02-26 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-10617.


Resolution: Won't Fix

bq. IIRC, this was fixed on 0.96 by switching serialization libraries.

Yes, and we can't fix it this way for 0.94 because that represents an 
incompatible change. Sadly I have to resolve this as wontfix. [~lhofhansl], 
feel free to reopen if you feel otherwise. (I doubt it.)

 Value lost if $ element is before column element in json when posted to 
 Rest Server
 ---

 Key: HBASE-10617
 URL: https://issues.apache.org/jira/browse/HBASE-10617
 Project: HBase
  Issue Type: Bug
  Components: REST
Affects Versions: 0.94.11
Reporter: Liu Shaohui
Priority: Minor

 When the following json data is posted to the rest server, it returns 200, but 
 the value is null in HBase
 {code}
 {Row: { key:cjI=, Cell: {$:ZGF0YTE=, column:ZjE6YzI=}}}
 {code}
 From the rest server log, we found the value length is 0 after the server 
 parses the json into the RowModel object
 {code}
 14/02/26 17:52:14 DEBUG rest.RowResource: PUT 
 {totalColumns:1,families:{f1:[{timestamp:9223372036854775807,qualifier:c2,vlen:0}]},row:r2}
 {code}
 When the order puts column before $, it works fine.
 {code}
 {Row: { key:cjI=, Cell: {column:ZjE6YzI=, $:ZGF0YTE= }}}
 {code}
 Different json libs may order these two elements differently even if 
 column is put before $.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

