[jira] [Commented] (HBASE-14211) Add more rigorous integration tests of splits

2015-08-11 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682485#comment-14682485
 ] 

Elliott Clark commented on HBASE-14211:
---

We're seeing issues with splits, so let's make the tests for those more stressful.

 Add more rigorous integration tests of splits
 -

 Key: HBASE-14211
 URL: https://issues.apache.org/jira/browse/HBASE-14211
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Elliott Clark

 Add a chaos action that will turn down region size.
 * Eventually this will cause regions to split a lot.
 * It will need to have a min region size.
 Add a chaos monkey action that will change split policy
 * Change between Uniform and SplittingUpTo and back
 Add chaos monkey action that will request splits of every region.
 * When regions all reach the size at the exact same time, the compactions add a 
 lot of work.
 * This simulates a very well distributed write pattern reaching the region 
 size.
 Add the ability to start with fewer regions than normal to ITBLL
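
The "turn down region size" action above can be sketched as follows. This is an illustrative Python model, not the actual ChaosMonkey API; the function name, shrink factor, and floor value are assumptions:

```python
def turn_down_region_size(current_bytes: int,
                          factor: float = 0.5,
                          min_bytes: int = 256 * 1024 * 1024) -> int:
    """Shrink the configured max region size so splits happen more often,
    but never drop below a floor (a too-small region size would thrash)."""
    return max(int(current_bytes * factor), min_bytes)
```

Repeatedly applying the action converges on the floor, which is why the description calls out the need for a minimum region size.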



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14206) MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges

2015-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692278#comment-14692278
 ] 

Hudson commented on HBASE-14206:


SUCCESS: Integrated in HBase-1.2-IT #84 (See 
[https://builds.apache.org/job/HBase-1.2-IT/84/])
HBASE-14206 MultiRowRangeFilter returns records whose rowKeys are out of 
allowed ranges (Anton Nazaruk) (tedyu: rev 
8954dd88f243da79d00a0f0c722238c421b40f55)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestMultiRowRangeFilter.java


 MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges
 ---

 Key: HBASE-14206
 URL: https://issues.apache.org/jira/browse/HBASE-14206
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
 Environment: linux, java7
Reporter: Anton Nazaruk
Assignee: Anton Nazaruk
Priority: Critical
  Labels: filter
 Fix For: 2.0.0, 1.2.0, 1.1.2, 1.3.0

 Attachments: 14206-branch-1.txt, 14206-test.patch, 14206-v1.txt


 I haven't found a way to attach a test program to the JIRA issue, so I put it below:
 {code}
 public class MultiRowRangeFilterTest {

   byte[] key1Start = new byte[] {-3};
   byte[] key1End   = new byte[] {-2};
   byte[] key2Start = new byte[] {5};
   byte[] key2End   = new byte[] {6};
   byte[] badKey    = new byte[] {-10};

   @Test
   public void testRanges() throws IOException {
     MultiRowRangeFilter filter = new MultiRowRangeFilter(Arrays.asList(
         new MultiRowRangeFilter.RowRange(key1Start, true, key1End, false),
         new MultiRowRangeFilter.RowRange(key2Start, true, key2End, false)
     ));
     filter.filterRowKey(badKey, 0, 1);
     /*
      * FAILS -- includes BAD key!
      * Expected :SEEK_NEXT_USING_HINT
      * Actual   :INCLUDE
      */
     assertEquals(Filter.ReturnCode.SEEK_NEXT_USING_HINT, filter.filterKeyValue(null));
   }
 }
 {code}
 It seems to happen on 2.0.0-SNAPSHOT too, but I wasn't able to link one with 
 the included class.
 I have played for some time with the algorithm, and found that a quick fix may 
 be applied to the getNextRangeIndex(byte[] rowKey) method (hbase-client:1.1.0):
 {code}
 if (insertionPosition == 0 && !rangeList.get(insertionPosition).contains(rowKey)) {
   return ROW_BEFORE_FIRST_RANGE;
 }
 // FIX START
 if (!this.initialized) {
   this.initialized = true;
 }
 // FIX END
 return insertionPosition;
 {code} 
 Thanks, hope it will help.
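
For readers following the bug: getNextRangeIndex is a binary search over sorted, non-overlapping ranges, and HBase compares row-key bytes as unsigned, so the Java test's badKey of -10 (0xf6) sorts *between* the {5} range and the {-3} range. Below is a simplified Python model of what a correct lookup must return for a key before, inside, and between ranges; it is not the HBase implementation, just an illustration of the expected contract (the "between ranges" case is the one the report shows returning INCLUDE instead of a seek hint):

```python
import bisect

# Sentinel mirroring HBase's ROW_BEFORE_FIRST_RANGE semantics
ROW_BEFORE_FIRST_RANGE = -1

def get_next_range_index(ranges, row_key):
    """ranges: sorted, non-overlapping [start, stop) byte-string pairs.
    Returns the index of the range containing row_key, or the index of the
    next range to seek to, or ROW_BEFORE_FIRST_RANGE."""
    starts = [start for start, _ in ranges]
    # index of the last range whose start <= row_key
    pos = bisect.bisect_right(starts, row_key) - 1
    if pos < 0:
        return ROW_BEFORE_FIRST_RANGE  # key sorts before every range
    start, stop = ranges[pos]
    if row_key < stop:
        return pos       # inside ranges[pos]: INCLUDE is correct
    return pos + 1       # past ranges[pos]: seek hint to the next range
```

With the test's ranges sorted unsigned ([0x05, 0x06) then [0xfd, 0xfe)), the bad key 0xf6 maps to the second range's index, i.e. SEEK_NEXT_USING_HINT, never INCLUDE.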



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14196) Do not use local thread cache of table instances in Thrift server

2015-08-11 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-14196:
-
Fix Version/s: (was: 1.1.2)
   1.1.3

 Do not use local thread cache of table instances in Thrift server
 -

 Key: HBASE-14196
 URL: https://issues.apache.org/jira/browse/HBASE-14196
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Affects Versions: 0.98.13, 1.1.1, 1.0.1.1, 1.1.0.1
Reporter: Vladimir Rodionov
Assignee: Vladimir Rodionov
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.3.0, 1.1.3


 This is an antipattern. Table objects are lightweight and should not be 
 cached; besides, underlying connections can be closed by the periodic 
 connection cleaner chore (ConnectionCache), and cached table instances may 
 become invalid. This is a Thrift1-specific issue; Thrift2 is OK.
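
The failure mode described above can be simulated outside HBase. The sketch below is a minimal Python stand-in (the class names are illustrative, not the Thrift server's code): a Table handle cached across requests goes stale when the cleaner chore closes its connection, while acquiring a fresh handle from a live connection per request works:

```python
class Connection:
    """Stands in for an HBase connection managed by ConnectionCache."""
    def __init__(self):
        self.closed = False

    def get_table(self, name):
        if self.closed:
            raise RuntimeError("connection closed")
        return Table(name, self)

class Table:
    """Lightweight handle; invalid once its connection is closed."""
    def __init__(self, name, conn):
        self.name, self.conn = name, conn

    def get(self, row):
        if self.conn.closed:
            raise RuntimeError("stale cached table")
        return ("value", row)

# Antipattern: cache the Table across requests.
conn = Connection()
cached = conn.get_table("t1")
conn.closed = True            # periodic cleaner chore closes the idle connection
try:
    cached.get("row1")        # cached handle is now invalid
    stale_ok = True
except RuntimeError:
    stale_ok = False

# Recommended: acquire a Table from a live connection per request.
conn2 = Connection()
fresh_value = conn2.get_table("t1").get("row1")
```

Since table handles are cheap to create, per-request acquisition costs little and avoids the staleness window entirely.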



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14212) Add IT test for procedure-v2-based namespace DDL

2015-08-11 Thread Stephen Yuan Jiang (JIRA)
Stephen Yuan Jiang created HBASE-14212:
--

 Summary: Add IT test for procedure-v2-based namespace DDL
 Key: HBASE-14212
 URL: https://issues.apache.org/jira/browse/HBASE-14212
 Project: HBase
  Issue Type: Sub-task
  Components: proc-v2
Affects Versions: 2.0.0, 1.3.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang


Integration test for proc-v2-based table DDLs was created in HBASE-12439 during 
the HBase 1.1 release.  With HBASE-13212, proc-v2-based namespace DDLs are 
introduced.  We need to enhance the IT from HBASE-12439 to include namespace 
DDLs.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14213) Backport HBASE-14085 Correct LICENSE and NOTICE files in artifacts to 0.94

2015-08-11 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HBASE-14213:


 Summary: Backport HBASE-14085 Correct LICENSE and NOTICE files in 
artifacts to 0.94
 Key: HBASE-14213
 URL: https://issues.apache.org/jira/browse/HBASE-14213
 Project: HBase
  Issue Type: Task
  Components: build
Reporter: Nick Dimiduk
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 0.94.28


From tail of thread on HBASE-14085, opening a backport ticket for 0.94. Took 
the liberty of assigning to [~busbey].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-08-11 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-14085:
-
   Resolution: Fixed
Fix Version/s: (was: 0.94.28)
   Status: Resolved  (was: Patch Available)

Resolving ticket to go forward with 1.1.2 release candidates. Opened 
HBASE-14213 for tracking the 0.94 backport.

 Correct LICENSE and NOTICE files in artifacts
 -

 Key: HBASE-14085
 URL: https://issues.apache.org/jira/browse/HBASE-14085
 Project: HBase
  Issue Type: Task
  Components: build
Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2

 Attachments: HBASE-14085-0.98-addendum.patch, HBASE-14085.1.patch, 
 HBASE-14085.2.patch, HBASE-14085.3.patch


 +Problems:
 * checked LICENSE/NOTICE on binary
 ** binary artifact LICENSE file has not been updated to include the 
 additional license terms for contained third party dependencies
 ** binary artifact NOTICE file does not include a copyright line
 ** binary artifact NOTICE file does not appear to propagate appropriate info 
 from the NOTICE files from bundled dependencies
 * checked NOTICE on source
 ** source artifact NOTICE file does not include a copyright line
 ** source NOTICE file includes notices for third party dependencies not 
 included in the artifact
 * checked NOTICE files shipped in maven jars
 ** copyright line only says 2015 when it's very likely the contents are under 
 copyright prior to this year
 * nit: NOTICE files in maven jars say "HBase - ${module}" rather than 
 "Apache HBase - ${module}" as required 
 refs:
 http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
 http://www.apache.org/dev/licensing-howto.html#binary
 http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11368) Multi-column family BulkLoad fails if compactions go on too long

2015-08-11 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682493#comment-14682493
 ] 

Stephen Yuan Jiang commented on HBASE-11368:


[~tianq] and [~stack], any update or concern on this patch?  We have a customer 
seeing this issue recently.

 Multi-column family BulkLoad fails if compactions go on too long
 

 Key: HBASE-11368
 URL: https://issues.apache.org/jira/browse/HBASE-11368
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: Qiang Tian
 Attachments: hbase-11368-0.98.5.patch, hbase11368-master.patch, 
 key_stacktrace_hbase10882.TXT, performance_improvement_verification_98.5.patch


 Compactions take a read lock.  In a multi-column family region, before bulk 
 loading, we want to take a write lock on the region.  If the compaction takes 
 too long, the bulk load fails.
 Various recipes include:
 + Making smaller regions (lame)
 + [~victorunique] suggests major compacting just before bulk loading over in 
 HBASE-10882 as a workaround.
 Does the compaction need a read lock for that long?  Does the bulk load need 
 a full write lock when there are multiple column families?  Can we fail more 
 gracefully at least?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14190) Assign system tables ahead of user region assignment

2015-08-11 Thread Vandana Ayyalasomayajula (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14687390#comment-14687390
 ] 

Vandana Ayyalasomayajula commented on HBASE-14190:
--

In the MetaTableAccessor class, getSystemTableRegionsAndLocations() method: 
since we know that we need system table region information, and their names 
start with the NamespaceDescriptor.SYSTEM_NAMESPACE_NAME_STR string, can we 
optimize this:

{quote}
+scanMeta(connection, HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW,
+  QueryType.REGION, visitor);
{quote}

This way we will avoid scanning the entire meta table.
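
Narrowing a scan to one key prefix is the standard prefix-scan trick: the start row is the prefix itself, and the stop row is the prefix with its last non-0xFF byte incremented. The sketch below illustrates that computation in Python; it is not the HBASE-14190 patch itself, just the technique the comment asks for:

```python
def prefix_stop_row(prefix: bytes) -> bytes:
    """Smallest row key greater than every key starting with `prefix`:
    drop trailing 0xFF bytes, then increment the last remaining byte."""
    p = bytearray(prefix)
    while p and p[-1] == 0xFF:
        p.pop()
    if not p:
        return b""  # empty stop row: scan to the end of the table
    p[-1] += 1
    return bytes(p)
```

Scanning [prefix, prefix_stop_row(prefix)) then visits only system-namespace rows instead of the whole meta table.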

 Assign system tables ahead of user region assignment
 

 Key: HBASE-14190
 URL: https://issues.apache.org/jira/browse/HBASE-14190
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Critical
 Attachments: 14190-v6.txt, 14190-v7.txt


 Currently the namespace table region is assigned like user regions.
 I spent several hours working with a customer where master couldn't finish 
 initialization.
 Even though master was restarted quite a few times, it went down with the 
 following:
 {code}
 2015-08-05 17:16:57,530 FATAL [hdpmaster1:6.activeMasterManager] 
 master.HMaster: Master server abort: loaded coprocessors are: []
 2015-08-05 17:16:57,530 FATAL [hdpmaster1:6.activeMasterManager] 
 master.HMaster: Unhandled exception. Starting shutdown.
 java.io.IOException: Timedout 30ms waiting for namespace table to be 
 assigned
   at 
 org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:104)
   at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:985)
   at 
 org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:779)
   at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:182)
   at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1646)
   at java.lang.Thread.run(Thread.java:744)
 {code}
 During previous run(s), namespace table was created, hence leaving an entry 
 in hbase:meta.
 The following if block in TableNamespaceManager#start() was skipped:
 {code}
 if (!MetaTableAccessor.tableExists(masterServices.getConnection(),
   TableName.NAMESPACE_TABLE_NAME)) {
 {code}
 TableNamespaceManager#start() spins, waiting for namespace region to be 
 assigned.
 There was an issue with the master assigning user regions.
 We tried issuing 'assign' command from hbase shell which didn't work because 
 of the following check in MasterRpcServices#assignRegion():
 {code}
   master.checkInitialized();
 {code}
 This scenario can be avoided if we assign hbase:namespace table after 
 hbase:meta is assigned but before user table region assignment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14190) Assign system tables ahead of user region assignment

2015-08-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14190:
---
Attachment: 14190-v8.txt

Patch v8 addresses Vandana's comment by narrowing the key range for the meta 
scan.

 Assign system tables ahead of user region assignment
 

 Key: HBASE-14190
 URL: https://issues.apache.org/jira/browse/HBASE-14190
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Critical
 Attachments: 14190-v6.txt, 14190-v7.txt, 14190-v8.txt


 Currently the namespace table region is assigned like user regions.
 I spent several hours working with a customer where master couldn't finish 
 initialization.
 Even though master was restarted quite a few times, it went down with the 
 following:
 {code}
 2015-08-05 17:16:57,530 FATAL [hdpmaster1:6.activeMasterManager] 
 master.HMaster: Master server abort: loaded coprocessors are: []
 2015-08-05 17:16:57,530 FATAL [hdpmaster1:6.activeMasterManager] 
 master.HMaster: Unhandled exception. Starting shutdown.
 java.io.IOException: Timedout 30ms waiting for namespace table to be 
 assigned
   at 
 org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:104)
   at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:985)
   at 
 org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:779)
   at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:182)
   at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1646)
   at java.lang.Thread.run(Thread.java:744)
 {code}
 During previous run(s), namespace table was created, hence leaving an entry 
 in hbase:meta.
 The following if block in TableNamespaceManager#start() was skipped:
 {code}
 if (!MetaTableAccessor.tableExists(masterServices.getConnection(),
   TableName.NAMESPACE_TABLE_NAME)) {
 {code}
 TableNamespaceManager#start() spins, waiting for namespace region to be 
 assigned.
 There was an issue with the master assigning user regions.
 We tried issuing 'assign' command from hbase shell which didn't work because 
 of the following check in MasterRpcServices#assignRegion():
 {code}
   master.checkInitialized();
 {code}
 This scenario can be avoided if we assign hbase:namespace table after 
 hbase:meta is assigned but before user table region assignment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-14087) ensure correct ASF policy compliant headers on source/docs

2015-08-11 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk resolved HBASE-14087.
--
   Resolution: Fixed
Fix Version/s: (was: 0.98.14)

Closing once more; opened HBASE-14213 for tracking 0.94 backport. Let me know 
if I got this wrong [~busbey] [~apurtell] [~lhofhansl].

 ensure correct ASF policy compliant headers on source/docs
 --

 Key: HBASE-14087
 URL: https://issues.apache.org/jira/browse/HBASE-14087
 Project: HBase
  Issue Type: Sub-task
  Components: build
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-14087.1.patch, HBASE-14087.2.patch, 
 HBASE-14087.2.patch


 * we have a couple of files that are missing their headers.
 * we have one file using old-style ASF copyrights



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14215) Default cost used for PrimaryRegionCountSkewCostFunction is not sufficient

2015-08-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14215:
---
Status: Patch Available  (was: Open)

 Default cost used for PrimaryRegionCountSkewCostFunction is not sufficient 
 ---

 Key: HBASE-14215
 URL: https://issues.apache.org/jira/browse/HBASE-14215
 Project: HBase
  Issue Type: Bug
  Components: Balancer
Reporter: Biju Nair
Priority: Minor
 Attachments: 14215-v1.txt


 The current multiplier of 500 used in the stochastic balancer cost function 
 ``PrimaryRegionCountSkewCostFunction`` to calculate the cost of total 
 primary replica skew doesn't seem to be sufficient to prevent the skew 
 (refer to HBASE-14110). We would want the default cost to be a higher value so 
 that skew in the primary region replicas has a higher cost. The following is 
 the test result from setting the multiplier value to 1 (same as the region 
 replica rack cost multiplier) on a 3-rack, 9-RS-node cluster, which seems to 
 get the balancer to distribute the primaries uniformly.
 *Initial Primary replica distribution - using the current multiplier* 
  r1n10  102
  r1n11  85
  r1n9   88
  r2n10  120
  r2n11  120
  r2n9   124
  r3n10  135
  r3n11  124
  r3n9   129
 *After long duration of reads & writes - using current multiplier* 
  r1n10  102
  r1n11  85
  r1n9   88
  r2n10  120
  r2n11  120
  r2n9   124
  r3n10  135
  r3n11  124
  r3n9   129
 *After manual balancing*  
  r1n10  102
  r1n11  85
  r1n9   88
  r2n10  120
  r2n11  120
  r2n9   124
  r3n10  135
  r3n11  124
  r3n9   129
 *Increased multiplier for primaryRegionCountSkewCost to 1*
  r1n10  114
  r1n11  113
  r1n9   114
  r2n10  114
  r2n11  114
  r2n9   113
  r3n10  115
  r3n11  115
  r3n9   115
 Setting the `PrimaryRegionCountSkewCostFunction` multiplier value to 1 
 should help general HBase use.
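
For context on how the multiplier enters the picture: each stochastic-balancer cost function returns a normalized cost that is scaled by its multiplier before being summed with the other functions, so raising the multiplier makes primary-replica skew dominate the total cost. The sketch below illustrates that shape in Python; it is not HBase's exact formula, just an assumed normalized-deviation cost:

```python
def primary_skew_cost(primary_counts, multiplier):
    """Scaled skew cost in [0, multiplier]:
    0 when primaries are perfectly even, `multiplier` at maximum skew."""
    n = len(primary_counts)
    total = sum(primary_counts)
    mean = total / n
    # worst case: all primaries piled on one server
    worst = (total - mean) + mean * (n - 1)
    dev = sum(abs(c - mean) for c in primary_counts)
    return multiplier * dev / worst
```

With the same skew, a multiplier of 500 contributes little next to higher-weighted functions, while a larger multiplier makes the balancer actively trade other costs to even out primaries.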



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13212) Procedure V2 - master Create/Modify/Delete namespace

2015-08-11 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692652#comment-14692652
 ] 

Stephen Yuan Jiang commented on HBASE-13212:


[~enis], HBASE-14212 is created for adding IT for NS DDLs.  

We don't need a shared NS lock in create table.  In the race-condition scenario 
you mentioned - a create table DDL while a delete namespace (or rollback of 
create namespace) DDL happens at the same time - both of them may pass the 
pre-condition check (ensureNamespaceExists() is true and no more tables in the 
namespace).  During the procedure execution step, either {{the creation of the 
table would fail trying to create the table directory (because the namespace 
directory does not exist)}} or {{the deletion of the namespace would fail trying 
to delete the namespace directory (because it contains a table directory)}} - 
not beautiful (because it fails in the middle of the procedure and triggers the 
unhappy rollback path), but we will not create a table-in-a-non-existing-namespace 
corruption.

Note: due to the low frequency of deleting namespaces, the chance that we see 
this race condition is very low.

 Procedure V2 - master Create/Modify/Delete namespace
 

 Key: HBASE-13212
 URL: https://issues.apache.org/jira/browse/HBASE-13212
 Project: HBase
  Issue Type: Sub-task
  Components: master
Affects Versions: 2.0.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
  Labels: reliability
 Attachments: HBASE-13212.v1-master.patch, HBASE-13212.v2-master.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 master side, part of HBASE-12439
 starts up the procedure executor on the master
 and replaces the create/modify/delete namespace handlers with the procedure 
 version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14150) Add BulkLoad functionality to HBase-Spark Module

2015-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692653#comment-14692653
 ] 

Hadoop QA commented on HBASE-14150:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12749965/HBASE-14150.5.patch
  against master branch at commit a78e6e94994aaba2bee7747054ea9a55f1edd421.
  ATTACHMENT ID: 12749965

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15058//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15058//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15058//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15058//console

This message is automatically generated.

 Add BulkLoad functionality to HBase-Spark Module
 

 Key: HBASE-14150
 URL: https://issues.apache.org/jira/browse/HBASE-14150
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-14150.1.patch, HBASE-14150.2.patch, 
 HBASE-14150.3.patch, HBASE-14150.4.patch, HBASE-14150.5.patch


 Add on to the work done in HBASE-13992 to add functionality to do a bulk load 
 from a given RDD.
 This will do the following:
 1. Figure out the number of regions, and sort and partition the data correctly 
 to be written out to HFiles.
 2. Also, unlike the MR bulk load, I would like the columns to be sorted in 
 the shuffle stage and not in the memory of the reducer.  This will allow this 
 design to support super-wide records without going out of memory.
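
The two steps above amount to: bucket each row by the region whose start key covers it, then sort within each bucket so HFiles are written in key order (in Spark this is the kind of thing repartitionAndSortWithinPartitions achieves in the shuffle). Below is a single-machine Python sketch of that logic, not the hbase-spark module's API:

```python
import bisect

def partition_and_sort(rows, region_start_keys):
    """Assign each (row_key, value) to the region whose start key precedes it,
    then sort within each partition so HFile writes see ordered keys.
    region_start_keys must be sorted; the first is usually the empty key."""
    partitions = [[] for _ in region_start_keys]
    for key, value in rows:
        # index of the last region start <= key
        idx = bisect.bisect_right(region_start_keys, key) - 1
        partitions[max(idx, 0)].append((key, value))
    for part in partitions:
        part.sort()  # stand-in for sorting during the shuffle
    return partitions
```

Doing the sort as part of the shuffle, rather than buffering a whole row group in one reducer's memory, is what lets very wide rows stream through without OOM.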



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14150) Add BulkLoad functionality to HBase-Spark Module

2015-08-11 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692726#comment-14692726
 ] 

Andrew Purtell commented on HBASE-14150:


+1

 Add BulkLoad functionality to HBase-Spark Module
 

 Key: HBASE-14150
 URL: https://issues.apache.org/jira/browse/HBASE-14150
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-14150.1.patch, HBASE-14150.2.patch, 
 HBASE-14150.3.patch, HBASE-14150.4.patch, HBASE-14150.5.patch


 Add on to the work done in HBASE-13992 to add functionality to do a bulk load 
 from a given RDD.
 This will do the following:
 1. Figure out the number of regions, and sort and partition the data correctly 
 to be written out to HFiles.
 2. Also, unlike the MR bulk load, I would like the columns to be sorted in 
 the shuffle stage and not in the memory of the reducer.  This will allow this 
 design to support super-wide records without going out of memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13889) Fix hbase-shaded-client artifact so it works on hbase-downstreamer

2015-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692725#comment-14692725
 ] 

Hadoop QA commented on HBASE-13889:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12749959/HBASE-13889.patch
  against master branch at commit a78e6e94994aaba2bee7747054ea9a55f1edd421.
  ATTACHMENT ID: 12749959

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev-support patch that doesn't require tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+<shadedPattern>org.apache.hadoop.hbase.shaded.com.google</shadedPattern>
+<shadedPattern>org.apache.hadoop.hbase.shaded.com.jcraft</shadedPattern>
+<shadedPattern>org.apache.hadoop.hbase.shaded.com.thoughtworks</shadedPattern>
+<shadedPattern>org.apache.hadoop.hbase.shaded.com.jamesmurty</shadedPattern>
+<shadedPattern>org.apache.hadoop.hbase.shaded.com.lmax</shadedPattern>
+<shadedPattern>org.apache.hadoop.hbase.shaded.com.yammer</shadedPattern>
+<shadedPattern>org.apache.hadoop.hbase.shaded.io.netty</shadedPattern>
+<shadedPattern>org.apache.hadoop.hbase.shaded.org.codehaus</shadedPattern>
+<shadedPattern>org.apache.hadoop.hbase.shaded.org.jcodings</shadedPattern>
+<shadedPattern>org.apache.hadoop.hbase.shaded.org.joni</shadedPattern>

{color:red}-1 site{color}.  The patch appears to cause mvn post-site goal 
to fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.mapreduce.TestImportExport

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15061//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15061//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15061//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15061//console

This message is automatically generated.

 Fix hbase-shaded-client artifact so it works on hbase-downstreamer
 --

 Key: HBASE-13889
 URL: https://issues.apache.org/jira/browse/HBASE-13889
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 1.1.0, 1.1.0.1
 Environment: N/A?
Reporter: Dmitry Minkovsky
Assignee: Elliott Clark
Priority: Critical
 Fix For: 2.0.0, 1.2.0, 1.1.2

 Attachments: 13889.wip.patch, HBASE-13889.patch, HBASE-13889.patch, 
 Screen Shot 2015-06-11 at 10.59.55 AM.png


 The {{hbase-shaded-client}} artifact was introduced in 
 [HBASE-13517|https://issues.apache.org/jira/browse/HBASE-13517]. Thank you 
 very much for this, as I am new to Java building and was having a very 
 slow-moving time resolving conflicts. However, the shaded client artifact 
 seems to be missing {{javax.xml.transform.TransformerException}}.  I examined 
 the JAR, which does not have this package/class.
 Steps to reproduce:
 Java: 
 {code}
 package com.mycompany.app;
   
   
   
   

[jira] [Commented] (HBASE-13889) Fix hbase-shaded-client artifact so it works on hbase-downstreamer

2015-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692826#comment-14692826
 ] 

Hudson commented on HBASE-13889:


SUCCESS: Integrated in HBase-1.3-IT #84 (See 
[https://builds.apache.org/job/HBase-1.3-IT/84/])
HBASE-13889 Fix hbase-shaded-client artifact so it works on hbase-downstreamer 
(eclark: rev d50c55d9da81613c596f2292a1ed7c9a0175e28e)
* pom.xml
* hbase-shaded/pom.xml


 Fix hbase-shaded-client artifact so it works on hbase-downstreamer
 --

 Key: HBASE-13889
 URL: https://issues.apache.org/jira/browse/HBASE-13889
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 1.1.0, 1.1.0.1
 Environment: N/A?
Reporter: Dmitry Minkovsky
Assignee: Elliott Clark
Priority: Critical
 Fix For: 2.0.0, 1.2.0, 1.1.2

 Attachments: 13889.wip.patch, HBASE-13889.patch, HBASE-13889.patch, 
 Screen Shot 2015-06-11 at 10.59.55 AM.png


 The {{hbase-shaded-client}} artifact was introduced in 
 [HBASE-13517|https://issues.apache.org/jira/browse/HBASE-13517]. Thank you 
 very much for this, as I am new to Java building and was having a very 
 slow-moving time resolving conflicts. However, the shaded client artifact 
 seems to be missing {{javax.xml.transform.TransformerException}}.  I examined 
 the JAR, which does not have this package/class.
 Steps to reproduce:
 Java: 
 {code}
 package com.mycompany.app;

 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.client.Connection;
 import org.apache.hadoop.hbase.client.ConnectionFactory;

 public class App {
   public static void main( String[] args ) throws java.io.IOException {
     Configuration config = HBaseConfiguration.create();
     Connection connection = ConnectionFactory.createConnection(config);
   }
 }
 {code}
 POM:
 {code}
 <project xmlns="http://maven.apache.org/POM/4.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                              http://maven.apache.org/xsd/maven-4.0.0.xsd">

   <modelVersion>4.0.0</modelVersion>

   <groupId>com.mycompany.app</groupId>
   <artifactId>my-app</artifactId>

[jira] [Updated] (HBASE-14190) Assign system tables ahead of user region assignment

2015-08-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14190:
---
Attachment: (was: 14190-v8.txt)

 Assign system tables ahead of user region assignment
 

 Key: HBASE-14190
 URL: https://issues.apache.org/jira/browse/HBASE-14190
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Critical
 Attachments: 14190-v6.txt, 14190-v7.txt


 Currently the namespace table region is assigned like user regions.
 I spent several hours working with a customer whose master couldn't finish 
 initialization.
 Even though the master was restarted quite a few times, it went down with the 
 following:
 {code}
 2015-08-05 17:16:57,530 FATAL [hdpmaster1:6.activeMasterManager] 
 master.HMaster: Master server abort: loaded coprocessors are: []
 2015-08-05 17:16:57,530 FATAL [hdpmaster1:6.activeMasterManager] 
 master.HMaster: Unhandled exception. Starting shutdown.
 java.io.IOException: Timedout 30ms waiting for namespace table to be 
 assigned
   at 
 org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:104)
   at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:985)
   at 
 org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:779)
   at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:182)
   at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1646)
   at java.lang.Thread.run(Thread.java:744)
 {code}
 During previous run(s), namespace table was created, hence leaving an entry 
 in hbase:meta.
 The following if block in TableNamespaceManager#start() was skipped:
 {code}
 if (!MetaTableAccessor.tableExists(masterServices.getConnection(),
   TableName.NAMESPACE_TABLE_NAME)) {
 {code}
 TableNamespaceManager#start() spins, waiting for namespace region to be 
 assigned.
 There was an issue with the master assigning user regions.
 We tried issuing the 'assign' command from hbase shell, which didn't work 
 because of the following check in MasterRpcServices#assignRegion():
 {code}
   master.checkInitialized();
 {code}
 This scenario can be avoided if we assign hbase:namespace table after 
 hbase:meta is assigned but before user table region assignment.





[jira] [Commented] (HBASE-10844) Coprocessor failure during batchmutation leaves the memstore datastructs in an inconsistent state

2015-08-11 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692418#comment-14692418
 ] 

Andrew Purtell commented on HBASE-10844:


+1, the v2 patch addresses my concerns about only warning previously

 Coprocessor failure during batchmutation leaves the memstore datastructs in 
 an inconsistent state
 -

 Key: HBASE-10844
 URL: https://issues.apache.org/jira/browse/HBASE-10844
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Devaraj Das
Assignee: Devaraj Das
 Attachments: 10844-1-0.98.txt, 10844-1.txt, 10844-v2.patch


 Observed this in the testing with Phoenix. The test in Phoenix - 
 MutableIndexFailureIT deliberately fails the batchmutation call via the 
 installed coprocessor. But the update is not rolled back. That leaves the 
 memstore inconsistent. In particular, I observed that getFlushableSize is 
 updated before the coprocessor was called but the update is not rolled back. 
 When the region is being closed at some later point, the assert introduced in 
 HBASE-10514 in the HRegion.doClose() causes the RegionServer to shutdown 
 abnormally.
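The invariant at stake can be sketched in isolation (a hypothetical simplification, not HBase code; `Accounting`, `Hook`, and `apply` are illustrative names): any size accounting done before a coprocessor-style hook runs must be rolled back if the hook throws, or the flushable size drifts out of sync with the actual contents.

```java
// Minimal model of pre-hook size accounting with rollback on failure.
public class Accounting {
    private long flushableSize = 0;

    interface Hook { void run(); }

    // Apply a mutation of the given size, invoking a coprocessor-like hook;
    // undo the accounting if the hook fails so state stays consistent.
    void apply(long size, Hook hook) {
        flushableSize += size;
        try {
            hook.run();
        } catch (RuntimeException e) {
            flushableSize -= size; // roll back, then propagate the failure
            throw e;
        }
    }

    long flushableSize() { return flushableSize; }
}
```

Without the rollback in the catch block, a failed hook would leave `flushableSize` inflated, which is the inconsistency the close-time assert then trips over.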





[jira] [Updated] (HBASE-14190) Assign system tables ahead of user region assignment

2015-08-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14190:
---
Attachment: 14190-v8.txt

 Assign system tables ahead of user region assignment
 

 Key: HBASE-14190
 URL: https://issues.apache.org/jira/browse/HBASE-14190
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Critical
 Attachments: 14190-v6.txt, 14190-v7.txt, 14190-v8.txt


 Currently the namespace table region is assigned like user regions.
 I spent several hours working with a customer whose master couldn't finish 
 initialization.
 Even though the master was restarted quite a few times, it went down with the 
 following:
 {code}
 2015-08-05 17:16:57,530 FATAL [hdpmaster1:6.activeMasterManager] 
 master.HMaster: Master server abort: loaded coprocessors are: []
 2015-08-05 17:16:57,530 FATAL [hdpmaster1:6.activeMasterManager] 
 master.HMaster: Unhandled exception. Starting shutdown.
 java.io.IOException: Timedout 30ms waiting for namespace table to be 
 assigned
   at 
 org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:104)
   at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:985)
   at 
 org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:779)
   at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:182)
   at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1646)
   at java.lang.Thread.run(Thread.java:744)
 {code}
 During previous run(s), namespace table was created, hence leaving an entry 
 in hbase:meta.
 The following if block in TableNamespaceManager#start() was skipped:
 {code}
 if (!MetaTableAccessor.tableExists(masterServices.getConnection(),
   TableName.NAMESPACE_TABLE_NAME)) {
 {code}
 TableNamespaceManager#start() spins, waiting for namespace region to be 
 assigned.
 There was an issue with the master assigning user regions.
 We tried issuing the 'assign' command from hbase shell, which didn't work 
 because of the following check in MasterRpcServices#assignRegion():
 {code}
   master.checkInitialized();
 {code}
 This scenario can be avoided if we assign hbase:namespace table after 
 hbase:meta is assigned but before user table region assignment.





[jira] [Commented] (HBASE-14208) Remove yarn dependencies on -common and -client

2015-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692423#comment-14692423
 ] 

Hudson commented on HBASE-14208:


FAILURE: Integrated in HBase-TRUNK #6715 (See 
[https://builds.apache.org/job/HBase-TRUNK/6715/])
HBASE-14208 Remove yarn dependencies on -common and -client (eclark: rev 
38b94709ee3727832cb58446b4fa60cf5c37b9a6)
* hbase-client/pom.xml
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/security/User.java
* hbase-common/pom.xml
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java


 Remove yarn dependencies on -common and -client
 ---

 Key: HBASE-14208
 URL: https://issues.apache.org/jira/browse/HBASE-14208
 Project: HBase
  Issue Type: Bug
  Components: build, Client
Affects Versions: 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0

 Attachments: HBASE-14208-v1.patch, HBASE-14208.patch


 They aren't really needed since MR can't be used without server.





[jira] [Updated] (HBASE-14201) hbck should not take a lock unless fixing errors

2015-08-11 Thread Simon Law (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Law updated HBASE-14201:
--
Attachment: HBASE-14201-v1.patch

Revised patch with fixed tests.

 hbck should not take a lock unless fixing errors
 

 Key: HBASE-14201
 URL: https://issues.apache.org/jira/browse/HBASE-14201
 Project: HBase
  Issue Type: Bug
  Components: hbck, util
Affects Versions: 2.0.0, 1.3.0
Reporter: Simon Law
Assignee: Simon Law
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14201-v0.patch, HBASE-14201-v1.patch


 By default, hbck is run in a read-only checker mode. In this case, it is
 sensible to let others run. By default, the balancer is left alone,
 which may cause spurious errors, but cannot leave the balancer in a bad
 state. It is dangerous to leave the balancer off by accident, so it is only
 ever re-enabled after fixing; it will never be forced off because of
 racing.
 When hbck is run in fixer mode, it must take an exclusive lock and
 disable the balancer, or all havoc will break loose.
 If you want to stop hbck from running in parallel, the -exclusive flag
 will create the lock file. If you want to force -disableBalancer, that
 option is available too. This makes more semantic sense than -noLock and
 -noSwitchBalancer, respectively.
 This task is related to HBASE-14092.





[jira] [Commented] (HBASE-14194) Undeprecate methods in ThriftServerRunner.HBaseHandler

2015-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681354#comment-14681354
 ] 

Hudson commented on HBASE-14194:


FAILURE: Integrated in HBase-1.3-IT #82 (See 
[https://builds.apache.org/job/HBase-1.3-IT/82/])
HBASE-14194 Undeprecate methods in ThriftServerRunner.HBaseHandler (apurtell: 
rev c07eb21e4be74cac4756cf44331269257ac56daa)
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java


 Undeprecate methods in ThriftServerRunner.HBaseHandler
 --

 Key: HBASE-14194
 URL: https://issues.apache.org/jira/browse/HBASE-14194
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Francke
Assignee: Lars Francke
Priority: Trivial
 Fix For: 2.0.0, 1.2.0, 1.3.0

 Attachments: HBASE-14194.patch


 The methods {{get}}, {{getVer}}, {{getVerTs}}, {{atomicIncrement}} were 
 deprecated back in HBASE-1304. My guess is this was because it wasn't 
 distinguishing between column family and column qualifier, but I'm not sure. 
 Either way, it's been in there for six years without documentation or a 
 deprecation at the interface level, so it only adds confusion; I'll attach 
 a patch to remove the deprecations.
 I guess at one point the whole old Thrift server will be deprecated.





[jira] [Commented] (HBASE-5878) Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.

2015-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681356#comment-14681356
 ] 

Hudson commented on HBASE-5878:
---

FAILURE: Integrated in HBase-1.3-IT #82 (See 
[https://builds.apache.org/job/HBase-1.3-IT/82/])
HBASE-5878 Use getVisibleLength public api from HdfsDataInputStream from 
Hadoop-2. (apurtell: rev 0862abd6599a6936fb8079f4c70afc660175ba11)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogReader.java


 Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.
 ---

 Key: HBASE-5878
 URL: https://issues.apache.org/jira/browse/HBASE-5878
 Project: HBase
  Issue Type: Bug
  Components: wal
Reporter: Uma Maheswara Rao G
Assignee: Ashish Singhi
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-5878-branch-1.0.patch, HBASE-5878-v2.patch, 
 HBASE-5878-v3.patch, HBASE-5878-v4.patch, HBASE-5878-v5-0.98.patch, 
 HBASE-5878-v5.patch, HBASE-5878-v5.patch, HBASE-5878-v6-0.98.patch, 
 HBASE-5878-v6.patch, HBASE-5878-v7-0.98.patch, HBASE-5878.patch


 SequenceFileLogReader: 
 Currently HBase uses the getFileLength api from the DFSInputStream class via 
 reflection. DFSInputStream is not exposed as public, so this may change in 
 the future. HDFS now exposes HdfsDataInputStream as a public API.
 We can make use of it, and only fall back to looking up the getFileLength api 
 on DFSInputStream in an else condition, so that we will not have any sudden 
 surprise like we are facing today.
 Also, it currently just logs one warn message and proceeds if any exception 
 is thrown while getting the length. I think we can re-throw the exception 
 because there is no point in continuing with data loss.
 {code}
 long adjust = 0;
 try {
   Field fIn = FilterInputStream.class.getDeclaredField("in");
   fIn.setAccessible(true);
   Object realIn = fIn.get(this.in);
   // In hadoop 0.22, DFSInputStream is a standalone class.  Before this,
   // it was an inner class of DFSClient.
   if (realIn.getClass().getName().endsWith("DFSInputStream")) {
     Method getFileLength = realIn.getClass().
       getDeclaredMethod("getFileLength", new Class<?>[] {});
     getFileLength.setAccessible(true);
     long realLength = ((Long) getFileLength.
       invoke(realIn, new Object[] {})).longValue();
     assert(realLength >= this.length);
     adjust = realLength - this.length;
   } else {
     LOG.info("Input stream class: " + realIn.getClass().getName() +
       ", not adjusting length");
   }
 } catch (Exception e) {
   SequenceFileLogReader.LOG.warn(
     "Error while trying to get accurate file length.  " +
     "Truncation / data loss may occur if RegionServers die.", e);
 }
 return adjust + super.getPos();
 {code}
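The "reflect, then fail loudly" behaviour argued for above can be sketched in isolation (a hypothetical simplification: `ReflectiveLength`, `Fake`, and `lengthOf` are illustrative names, not HBase or HDFS APIs):

```java
import java.lang.reflect.Method;

// Looks up getFileLength reflectively and rethrows on failure, instead
// of logging a warning and continuing with possible data loss.
public class ReflectiveLength {
    // Stand-in for a stream whose length method is not publicly exposed.
    static class Fake {
        long getFileLength() { return 42L; }
    }

    static long lengthOf(Object stream) {
        try {
            Method m = stream.getClass().getDeclaredMethod("getFileLength");
            m.setAccessible(true);
            return (Long) m.invoke(stream);
        } catch (Exception e) {
            // Re-throw rather than warn-and-continue.
            throw new IllegalStateException("cannot determine file length", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(lengthOf(new Fake())); // prints 42
    }
}
```

The difference from the quoted snippet is purely in the catch block: surfacing the failure forces the caller to handle it, rather than silently reading a possibly-truncated WAL.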





[jira] [Updated] (HBASE-14206) MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges

2015-08-11 Thread Anton Nazaruk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Nazaruk updated HBASE-14206:
--
Priority: Critical  (was: Major)

 MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges
 ---

 Key: HBASE-14206
 URL: https://issues.apache.org/jira/browse/HBASE-14206
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
 Environment: linux, java7
Reporter: Anton Nazaruk
Assignee: Ted Yu
Priority: Critical
  Labels: filter

 I haven't found a way to attach test program to JIRA issue, so put it below :
 {code}
 public class MultiRowRangeFilterTest {
  
 byte[] key1Start = new byte[] {-3};
 byte[] key1End  = new byte[] {-2};
 byte[] key2Start = new byte[] {5};
 byte[] key2End  = new byte[] {6};
 byte[] badKey = new byte[] {-10};
 @Test
 public void testRanges() throws IOException {
 MultiRowRangeFilter filter = new MultiRowRangeFilter(Arrays.asList(
 new MultiRowRangeFilter.RowRange(key1Start, true, key1End, 
 false),
 new MultiRowRangeFilter.RowRange(key2Start, true, key2End, 
 false)
 ));
 filter.filterRowKey(badKey, 0, 1);
 /*
 * FAILS -- includes BAD key!
 * Expected :SEEK_NEXT_USING_HINT
 * Actual   :INCLUDE
 * */
 assertEquals(Filter.ReturnCode.SEEK_NEXT_USING_HINT, 
 filter.filterKeyValue(null));
 }
 }
 {code}
 It seems to happen on 2.0.0-SNAPSHOT too, but I wasn't able to link one with 
 included class.
 I have played some time with algorithm, and found that quick fix may be 
 applied to getNextRangeIndex(byte[] rowKey) method (hbase-client:1.1.0) :
 {code}
 if (insertionPosition == 0 && !rangeList.get(insertionPosition).contains(rowKey)) {
 return ROW_BEFORE_FIRST_RANGE;
 }
 // FIX START
 if(!this.initialized) {
 this.initialized = true;
 }
 // FIX END
 return insertionPosition;
 {code} 
 Thanks, hope it will help.
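The behaviour hinges on how a binary-search insertion position is interpreted. A minimal, self-contained illustration (simplified to integer start keys; `InsertionPositionDemo` and `insertionPosition` are stand-ins for the value computed in getNextRangeIndex, not HBase code):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Collections.binarySearch returns -(insertionPoint) - 1 for a missing
// key, so a key that sorts before every range start yields insertion
// position 0 -- without an explicit "before first range" check, it looks
// as if the key belonged to the first range.
public class InsertionPositionDemo {
    static int insertionPosition(List<Integer> sortedStarts, int key) {
        int pos = Collections.binarySearch(sortedStarts, key);
        return pos >= 0 ? pos : -pos - 1;
    }

    public static void main(String[] args) {
        List<Integer> starts = Arrays.asList(-3, 5); // simplified range starts
        System.out.println(insertionPosition(starts, -10)); // prints 0
    }
}
```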





[jira] [Commented] (HBASE-13062) Add documentation coverage for configuring dns server with thrift and rest gateways

2015-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681428#comment-14681428
 ] 

Hadoop QA commented on HBASE-13062:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12749762/HBASE-13062-v1.patch
  against master branch at commit 3d5801602da7cde1f20bdd4b898e8b3cac77f2a3.
  ATTACHMENT ID: 12749762

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.lens.cube.parse.TestCubeRewriter.testMaxCoveringFact(TestCubeRewriter.java:154)
at 
org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80)
at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
at 
org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
at org.testng.TestRunner.privateRun(TestRunner.java:767)
at org.testng.TestRunner.run(TestRunner.java:617)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:329)
at org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
at org.testng.SuiteRunner.run(SuiteRunner.java:240)
at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
at org.testng.TestNG.runSuitesSequentially(TestNG.java:1198)
at org.testng.TestNG.runSuitesLocally(TestNG.java:1123)
at org.testng.TestNG.run(TestNG.java:1031)
at 
org.apache.maven.surefire.testng.TestNGExecutor.run(TestNGExecutor.java:69)
at 
org.apache.maven.surefire.testng.TestNGDirectoryTestSuite.executeMulti(TestNGDirectoryTestSuite.java:181)
at 
org.apache.maven.surefire.testng.TestNGDirectoryTestSuite.execute(TestNGDirectoryTestSuite.java:99)
at 
org.apache.maven.surefire.testng.TestNGProvider.invoke(TestNGProvider.java:113)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15039//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15039//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15039//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15039//console

This message is automatically generated.

 Add documentation coverage for configuring dns server with thrift and rest 
 gateways
 ---

 Key: HBASE-13062
 URL: https://issues.apache.org/jira/browse/HBASE-13062
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Srikanth Srungarapu
Assignee: Misty Stanley-Jones
Priority: Minor
 Attachments: HBASE-13062-v1.patch, HBASE-13062.patch


 Currently, the documentation doesn't cover configuring DNS with thrift 
 or rest gateways, though the code base does provide for it. The 
 following parameters are used to accomplish this.
 For REST:
 * hbase.rest.dns.interface
 * hbase.rest.dns.nameserver
 For Thrift:
 * hbase.thrift.dns.interface
 * 

[jira] [Updated] (HBASE-14206) MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges

2015-08-11 Thread Anton Nazaruk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Nazaruk updated HBASE-14206:
--
Description: 
I haven't found a way to attach test program to JIRA issue, so put it below :

{code}
public class MultiRowRangeFilterTest {
 
byte[] key1Start = new byte[] {-3};
byte[] key1End  = new byte[] {-2};

byte[] key2Start = new byte[] {5};
byte[] key2End  = new byte[] {6};

byte[] badKey = new byte[] {-10};

@Test
public void testRanges() throws IOException {
MultiRowRangeFilter filter = new MultiRowRangeFilter(Arrays.asList(
new MultiRowRangeFilter.RowRange(key1Start, true, key1End, 
false),
new MultiRowRangeFilter.RowRange(key2Start, true, key2End, 
false)
));
filter.filterRowKey(badKey, 0, 1);
/*
* FAILS -- includes BAD key!
* Expected :SEEK_NEXT_USING_HINT
* Actual   :INCLUDE
* */
assertEquals(Filter.ReturnCode.SEEK_NEXT_USING_HINT, 
filter.filterKeyValue(null));
}
}
{code}

It seems to happen on 2.0.0-SNAPSHOT too, but I wasn't able to link one with 
included class.

I have played some time with algorithm, and found that quick fix may be applied 
to getNextRangeIndex(byte[] rowKey) method (hbase-client:1.1.0) :

{code}
if (insertionPosition == 0 && !rangeList.get(insertionPosition).contains(rowKey)) {
return ROW_BEFORE_FIRST_RANGE;
}
// FIX START
if(!this.initialized) {
this.initialized = true;
}
// FIX END
return insertionPosition;
{code} 

Thanks, hope it will help.

  was:
I haven't found a way to attach test program to JIRA issue, so put it below :

{code}
public class MultiRowRangeFilterTest {
 
byte[] key1Start =new byte[] {-3};
byte[] key1End =new byte[] {-2};

byte[] key2Start =new byte[] {5};
byte[] key2End =new byte[] {6};

byte[] badKey = new byte[] {-10};

@Test
public void testRanges() throws IOException {
MultiRowRangeFilter filter = new MultiRowRangeFilter(Arrays.asList(
new MultiRowRangeFilter.RowRange(key1Start, true, key1End, 
false),
new MultiRowRangeFilter.RowRange(key2Start, true, key2End, 
false)
));
filter.filterRowKey(badKey, 0, 1);
/*
* FAILS -- includes BAD key!
* Expected :SEEK_NEXT_USING_HINT
* Actual   :INCLUDE
* */
assertEquals(Filter.ReturnCode.SEEK_NEXT_USING_HINT, 
filter.filterKeyValue(null));
}
}
{code}

It seems to happen on 2.0.0-SNAPSHOT too, but I wasn't able to link one with 
included class.

I have played some time with algorithm, and found that quick fix may be applied 
to getNextRangeIndex(byte[] rowKey) method (hbase-client:1.1.0) :

{code}
if (insertionPosition == 0 && !rangeList.get(insertionPosition).contains(rowKey)) {
return ROW_BEFORE_FIRST_RANGE;
}
// FIX START
if(!this.initialized) {
this.initialized = true;
}
// FIX END
return insertionPosition;
{code} 

Thanks, hope it will help.


 MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges
 ---

 Key: HBASE-14206
 URL: https://issues.apache.org/jira/browse/HBASE-14206
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
 Environment: linux, java7
Reporter: Anton Nazaruk
Assignee: Ted Yu
  Labels: filter

 I haven't found a way to attach test program to JIRA issue, so put it below :
 {code}
 public class MultiRowRangeFilterTest {
  
 byte[] key1Start = new byte[] {-3};
 byte[] key1End  = new byte[] {-2};
 byte[] key2Start = new byte[] {5};
 byte[] key2End  = new byte[] {6};
 byte[] badKey = new byte[] {-10};
 @Test
 public void testRanges() throws IOException {
 MultiRowRangeFilter filter = new MultiRowRangeFilter(Arrays.asList(
 new MultiRowRangeFilter.RowRange(key1Start, true, key1End, 
 false),
 new MultiRowRangeFilter.RowRange(key2Start, true, key2End, 
 false)
 ));
 filter.filterRowKey(badKey, 0, 1);
 /*
 * FAILS -- includes BAD key!
 * Expected :SEEK_NEXT_USING_HINT
 * Actual   :INCLUDE
 * */
 assertEquals(Filter.ReturnCode.SEEK_NEXT_USING_HINT, 
 filter.filterKeyValue(null));
 }
 }
 {code}
 It seems to happen on 2.0.0-SNAPSHOT too, but I wasn't able to link one with 
 included class.
 I have played some time with algorithm, and found that quick fix may be 
 applied to getNextRangeIndex(byte[] rowKey) method (hbase-client:1.1.0) :
 {code}
 if (insertionPosition == 0 && !rangeList.get(insertionPosition).contains(rowKey)) {
 return ROW_BEFORE_FIRST_RANGE;
 }
 // FIX START
 if(!this.initialized) {
 this.initialized = true;
 }
 // FIX END
 return 

[jira] [Commented] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-08-11 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681383#comment-14681383
 ] 

Enis Soztutar commented on HBASE-14085:
---

bq. Just saying we lived with the current state for 27 releases, no need to fix 
it now IMHO. 
Agreed. If too much work to backport to 0.94, I don't think we should block 
ourselves out of making more 0.94 releases. 

 Correct LICENSE and NOTICE files in artifacts
 -

 Key: HBASE-14085
 URL: https://issues.apache.org/jira/browse/HBASE-14085
 Project: HBase
  Issue Type: Task
  Components: build
Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2

 Attachments: HBASE-14085-0.98-addendum.patch, HBASE-14085.1.patch, 
 HBASE-14085.2.patch, HBASE-14085.3.patch


 +Problems:
 * checked LICENSE/NOTICE on binary
 ** binary artifact LICENSE file has not been updated to include the 
 additional license terms for contained third party dependencies
 ** binary artifact NOTICE file does not include a copyright line
 ** binary artifact NOTICE file does not appear to propagate appropriate info 
 from the NOTICE files from bundled dependencies
 * checked NOTICE on source
 ** source artifact NOTICE file does not include a copyright line
 ** source NOTICE file includes notices for third party dependencies not 
 included in the artifact
 * checked NOTICE files shipped in maven jars
 ** copyright line only says 2015 when it's very likely the contents are under 
 copyright prior to this year
 * nit: NOTICE file on jars in maven says "HBase - ${module}" rather than 
 "Apache HBase - ${module}" as required 
 refs:
 http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
 http://www.apache.org/dev/licensing-howto.html#binary
 http://www.apache.org/dev/licensing-howto.html#simple





[jira] [Commented] (HBASE-13907) Document how to deploy a coprocessor

2015-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681411#comment-14681411
 ] 

Hadoop QA commented on HBASE-13907:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12749764/HBASE-13907-v3.patch
  against master branch at commit 3d5801602da7cde1f20bdd4b898e8b3cac77f2a3.
  ATTACHMENT ID: 12749764

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClient
  org.apache.hadoop.hbase.client.TestMultiParallel
  org.apache.hadoop.hbase.client.TestMobSnapshotFromClient
  
org.apache.hadoop.hbase.client.replication.TestReplicationAdminWithClusters
  org.apache.hadoop.hbase.client.TestFromClientSide3
  
org.apache.hadoop.hbase.client.TestCloneSnapshotFromClientWithRegionReplicas
  org.apache.hadoop.hbase.client.TestCloneSnapshotFromClient
  org.apache.hadoop.hbase.client.TestHTableMultiplexerFlushCache
  org.apache.hadoop.hbase.TestIOFencing
  org.apache.hadoop.hbase.wal.TestWALSplitCompressed
  org.apache.hadoop.hbase.client.TestSnapshotCloneIndependence
  
org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClient
  org.apache.hadoop.hbase.client.TestAdmin2
  org.apache.hadoop.hbase.client.TestFromClientSide
  org.apache.hadoop.hbase.client.TestReplicaWithCluster
  org.apache.hadoop.hbase.master.TestDistributedLogSplitting
  org.apache.hadoop.hbase.client.TestClientPushback

 {color:red}-1 core zombie tests{color}.  There are 5 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat2.testWritingPEData(TestHFileOutputFormat2.java:335)
at 
org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.testMRIncrementalLoadWithSplit(TestHFileOutputFormat.java:384)
at 
org.apache.hadoop.hbase.mapreduce.TestCellCounter.testCellCounterForCompleteTable(TestCellCounter.java:299)
at 
org.apache.hadoop.hbase.mapreduce.TestTableSnapshotInputFormat.testWithMapReduceImpl(TestTableSnapshotInputFormat.java:247)
at 
org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatTestBase.testWithMapReduce(TableSnapshotInputFormatTestBase.java:112)
at 
org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatTestBase.testWithMapReduceSingleRegion(TableSnapshotInputFormatTestBase.java:91)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15038//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15038//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15038//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15038//console

This message is automatically generated.

 Document how to deploy a coprocessor
 

 Key: HBASE-13907
 URL: https://issues.apache.org/jira/browse/HBASE-13907
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
 Attachments: HBASE-13907-1.patch, HBASE-13907-2.patch, 
 HBASE-13907-v3.patch, HBASE-13907.patch


 Capture this information:
  Where are the dependencies located for these 

[jira] [Updated] (HBASE-14206) MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges

2015-08-11 Thread Anton Nazaruk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Nazaruk updated HBASE-14206:
--
Description: 
I haven't found a way to attach test program to JIRA issue, so put it below :

{code}
public class MultiRowRangeFilterTest {
 
byte[] key1Start =new byte[] {-3};
byte[] key1End =new byte[] {-2};

byte[] key2Start =new byte[] {5};
byte[] key2End =new byte[] {6};

byte[] badKey = new byte[] {-10};

@Test
public void testRanges() throws IOException {
MultiRowRangeFilter filter = new MultiRowRangeFilter(Arrays.asList(
new MultiRowRangeFilter.RowRange(key1Start, true, key1End, 
false),
new MultiRowRangeFilter.RowRange(key2Start, true, key2End, 
false)
));
filter.filterRowKey(badKey, 0, 1);
/*
* FAILS -- includes BAD key!
* Expected :SEEK_NEXT_USING_HINT
* Actual   :INCLUDE
* */
assertEquals(Filter.ReturnCode.SEEK_NEXT_USING_HINT, 
filter.filterKeyValue(null));
}
}
{code}

It seems to happen on 2.0.0-SNAPSHOT too, but I wasn't able to link one with 
included class.

I have played some time with algorithm, and found that quick fix may be applied 
to getNextRangeIndex(byte[] rowKey) method (hbase-client:1.1.0) :

{code}
if (insertionPosition == 0 && 
!rangeList.get(insertionPosition).contains(rowKey)) {
return ROW_BEFORE_FIRST_RANGE;
}
// FIX START
if(!this.initialized) {
this.initialized = true;
}
// FIX END
return insertionPosition;
{code} 

Thanks, hope it will help.

  was:
I haven't found a way to attach a test program to the JIRA issue, so I put it below:

{code}
public class MultiRowRangeFilterTest {
 
   byte[] key1Start =new byte[] {-3};
byte[] key1End =new byte[] {-2};

byte[] key2Start =new byte[] {5};
byte[] key2End =new byte[] {6};

byte[] badKey = new byte[] {-10};


@Test
public void testRanges() throws IOException {

MultiRowRangeFilter filter = new MultiRowRangeFilter(Arrays.asList(
new MultiRowRangeFilter.RowRange(key1Start, true, key1End, 
false),
new MultiRowRangeFilter.RowRange(key2Start, true, key2End, 
false)
));

filter.filterRowKey(badKey, 0, 1);

/*
* FAILS -- includes BAD key!
* Expected :SEEK_NEXT_USING_HINT
* Actual   :INCLUDE
* */
assertEquals(Filter.ReturnCode.SEEK_NEXT_USING_HINT, 
filter.filterKeyValue(null));
}
}
{code}

It seems to happen on 2.0.0-SNAPSHOT too, but I wasn't able to link one with 
included class.

I have played with the algorithm for some time, and found that a quick fix may be 
applied to the getNextRangeIndex(byte[] rowKey) method (hbase-client:1.1.0):

{code}
if (insertionPosition == 0 && 
!rangeList.get(insertionPosition).contains(rowKey)) {
return ROW_BEFORE_FIRST_RANGE;
  }

 // FIX START
if(!this.initialized) {
this.initialized = true;
}
// FIX END
  return insertionPosition;
{code} 

Thanks, hope it will help.


 MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges
 ---

 Key: HBASE-14206
 URL: https://issues.apache.org/jira/browse/HBASE-14206
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
 Environment: linux, java7
Reporter: Anton Nazaruk
Assignee: Ted Yu
  Labels: filter

 I haven't found a way to attach a test program to the JIRA issue, so I put it below:
 {code}
 public class MultiRowRangeFilterTest {
  
 byte[] key1Start =new byte[] {-3};
 byte[] key1End =new byte[] {-2};
 byte[] key2Start =new byte[] {5};
 byte[] key2End =new byte[] {6};
 byte[] badKey = new byte[] {-10};
 @Test
 public void testRanges() throws IOException {
 MultiRowRangeFilter filter = new MultiRowRangeFilter(Arrays.asList(
 new MultiRowRangeFilter.RowRange(key1Start, true, key1End, 
 false),
 new MultiRowRangeFilter.RowRange(key2Start, true, key2End, 
 false)
 ));
 filter.filterRowKey(badKey, 0, 1);
 /*
 * FAILS -- includes BAD key!
 * Expected :SEEK_NEXT_USING_HINT
 * Actual   :INCLUDE
 * */
 assertEquals(Filter.ReturnCode.SEEK_NEXT_USING_HINT, 
 filter.filterKeyValue(null));
 }
 }
 {code}
 It seems to happen on 2.0.0-SNAPSHOT too, but I wasn't able to link one with 
 included class.
 I have played with the algorithm for some time, and found that a quick fix may 
 be applied to the getNextRangeIndex(byte[] rowKey) method (hbase-client:1.1.0):
 {code}
 if (insertionPosition == 0 && 
 !rangeList.get(insertionPosition).contains(rowKey)) {
 return ROW_BEFORE_FIRST_RANGE;
 }
 // FIX START
 if(!this.initialized) {
 

[jira] [Created] (HBASE-14206) MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges

2015-08-11 Thread Anton Nazaruk (JIRA)
Anton Nazaruk created HBASE-14206:
-

 Summary: MultiRowRangeFilter returns records whose rowKeys are out 
of allowed ranges
 Key: HBASE-14206
 URL: https://issues.apache.org/jira/browse/HBASE-14206
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
 Environment: linux, java7
Reporter: Anton Nazaruk
Assignee: Ted Yu


I haven't found a way to attach a test program to the JIRA issue, so I put it below:

{code}
public class MultiRowRangeFilterTest {
 
   byte[] key1Start =new byte[] {-3};
byte[] key1End =new byte[] {-2};

byte[] key2Start =new byte[] {5};
byte[] key2End =new byte[] {6};

byte[] badKey = new byte[] {-10};


@Test
public void testRanges() throws IOException {

MultiRowRangeFilter filter = new MultiRowRangeFilter(Arrays.asList(
new MultiRowRangeFilter.RowRange(key1Start, true, key1End, 
false),
new MultiRowRangeFilter.RowRange(key2Start, true, key2End, 
false)
));

filter.filterRowKey(badKey, 0, 1);

/*
* FAILS -- includes BAD key!
* Expected :SEEK_NEXT_USING_HINT
* Actual   :INCLUDE
* */
assertEquals(Filter.ReturnCode.SEEK_NEXT_USING_HINT, 
filter.filterKeyValue(null));
}
}
{code}

It seems to happen on 2.0.0-SNAPSHOT too, but I wasn't able to link one with 
included class.

I have played with the algorithm for some time, and found that a quick fix may be 
applied to the getNextRangeIndex(byte[] rowKey) method (hbase-client:1.1.0):

{code}
if (insertionPosition == 0 && 
!rangeList.get(insertionPosition).contains(rowKey)) {
return ROW_BEFORE_FIRST_RANGE;
  }

 // FIX START
if(!this.initialized) {
this.initialized = true;
}
// FIX END
  return insertionPosition;
{code} 

Thanks, hope it will help.
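The failure above comes down to how the filter maps a row key onto the sorted range list, where row keys compare as unsigned bytes (so {-10}, i.e. 0xF6, sorts after {5}). The following is a simplified, self-contained sketch of that lookup; the class, helper names, and exact logic are illustrative, not HBase's actual MultiRowRangeFilter code:

```java
import java.util.Arrays;
import java.util.List;

// Simplified sketch (NOT the real HBase class) of a getNextRangeIndex-style
// lookup: given ranges sorted by start key, find the range a row key belongs
// to, or the range it should seek forward to.
public class RangeIndexSketch {
    static final int ROW_BEFORE_FIRST_RANGE = -1;

    // Unsigned lexicographic comparison, in the spirit of Bytes.compareTo.
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    // Each range is {start, stop} meaning [start, stop); list sorted by start.
    static int nextRangeIndex(List<byte[][]> ranges, byte[] rowKey) {
        int lo = 0, hi = ranges.size() - 1, insertion = ranges.size();
        // Binary search for the first range whose start key is > rowKey.
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (compare(ranges.get(mid)[0], rowKey) > 0) {
                insertion = mid;
                hi = mid - 1;
            } else {
                lo = mid + 1;
            }
        }
        if (insertion == 0) return ROW_BEFORE_FIRST_RANGE; // key precedes all ranges
        int idx = insertion - 1;
        // Inside the previous range? Otherwise, seek to the next range.
        return compare(rowKey, ranges.get(idx)[1]) < 0 ? idx : insertion;
    }

    public static void main(String[] args) {
        List<byte[][]> ranges = Arrays.asList(
            new byte[][] { {5}, {6} },    // [{5}, {6})
            new byte[][] { {-3}, {-2} }); // [0xFD, 0xFE) -- sorts last, unsigned
        byte[] badKey = {-10};            // 0xF6: between the two ranges
        // Expect the index of the NEXT range (a seek hint), not an "include".
        System.out.println(nextRangeIndex(ranges, badKey)); // 1
    }
}
```

In this sketch, the bad key lands between the two ranges and gets the next range's index as a seek target, which is the behavior the test above expects from SEEK_NEXT_USING_HINT.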



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14206) MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges

2015-08-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14206:
---
Attachment: 14206-test.patch

Change to TestMultiRowRangeFilter that shows the problem.
Suggested fix doesn't make the new test pass:
{code}
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.ja
index e7d8c38..1b1e1a9 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.java
@@ -237,6 +237,9 @@ public class MultiRowRangeFilter extends FilterBase {
   if (insertionPosition == 0 && 
!rangeList.get(insertionPosition).contains(rowKey)) {
 return ROW_BEFORE_FIRST_RANGE;
   }
+  if (!this.initialized) {
+this.initialized = true;
+  }
   return insertionPosition;
 }
 // the row key equals one of the start keys, and the the range exclude the 
start key
{code}

 MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges
 ---

 Key: HBASE-14206
 URL: https://issues.apache.org/jira/browse/HBASE-14206
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
 Environment: linux, java7
Reporter: Anton Nazaruk
Assignee: Ted Yu
Priority: Critical
  Labels: filter
 Attachments: 14206-test.patch


 I haven't found a way to attach a test program to the JIRA issue, so I put it below:
 {code}
 public class MultiRowRangeFilterTest {
  
 byte[] key1Start = new byte[] {-3};
 byte[] key1End  = new byte[] {-2};
 byte[] key2Start = new byte[] {5};
 byte[] key2End  = new byte[] {6};
 byte[] badKey = new byte[] {-10};
 @Test
 public void testRanges() throws IOException {
 MultiRowRangeFilter filter = new MultiRowRangeFilter(Arrays.asList(
 new MultiRowRangeFilter.RowRange(key1Start, true, key1End, 
 false),
 new MultiRowRangeFilter.RowRange(key2Start, true, key2End, 
 false)
 ));
 filter.filterRowKey(badKey, 0, 1);
 /*
 * FAILS -- includes BAD key!
 * Expected :SEEK_NEXT_USING_HINT
 * Actual   :INCLUDE
 * */
 assertEquals(Filter.ReturnCode.SEEK_NEXT_USING_HINT, 
 filter.filterKeyValue(null));
 }
 }
 {code}
 It seems to happen on 2.0.0-SNAPSHOT too, but I wasn't able to link one with 
 included class.
 I have played with the algorithm for some time, and found that a quick fix may 
 be applied to the getNextRangeIndex(byte[] rowKey) method (hbase-client:1.1.0):
 {code}
 if (insertionPosition == 0 && 
 !rangeList.get(insertionPosition).contains(rowKey)) {
 return ROW_BEFORE_FIRST_RANGE;
 }
 // FIX START
 if(!this.initialized) {
 this.initialized = true;
 }
 // FIX END
 return insertionPosition;
 {code} 
 Thanks, hope it will help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13062) Add documentation coverage for configuring dns server with thrift and rest gateways

2015-08-11 Thread Srikanth Srungarapu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681414#comment-14681414
 ] 

Srikanth Srungarapu commented on HBASE-13062:
-

+1

Perfect. 

 Add documentation coverage for configuring dns server with thrift and rest 
 gateways
 ---

 Key: HBASE-13062
 URL: https://issues.apache.org/jira/browse/HBASE-13062
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Srikanth Srungarapu
Assignee: Misty Stanley-Jones
Priority: Minor
 Attachments: HBASE-13062-v1.patch, HBASE-13062.patch


 Currently, the documentation doesn't cover about configuring DNS with thrift 
 or rest gateways, though code base does provide provision for doing so. The 
 following parameters are being used for accomplishing the same.
 For REST:
 * hbase.rest.dns.interface
 * hbase.rest.dns.nameserver
 For Thrift:
 * hbase.thrift.dns.interface
 * hbase.thrift.dns.nameserver
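A sketch of the corresponding hbase-site.xml entries the documentation could show; the interface and nameserver values below are illustrative placeholders:

```xml
<!-- Illustrative hbase-site.xml fragment for the DNS parameters above. -->
<property>
  <name>hbase.rest.dns.interface</name>
  <value>eth0</value>
</property>
<property>
  <name>hbase.rest.dns.nameserver</name>
  <value>default</value>
</property>
<property>
  <name>hbase.thrift.dns.interface</name>
  <value>eth0</value>
</property>
<property>
  <name>hbase.thrift.dns.nameserver</name>
  <value>default</value>
</property>
```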



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14203) remove duplicate code getTableDescriptor in HTable

2015-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681412#comment-14681412
 ] 

Hadoop QA commented on HBASE-14203:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12749760/HBASE-14203.patch
  against master branch at commit 3d5801602da7cde1f20bdd4b898e8b3cac77f2a3.
  ATTACHMENT ID: 12749760

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1861 checkstyle errors (more than the master's current 1858 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestAcidGuarantees
  
org.apache.hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.mob.mapreduce.TestMobSweeper.testSweeper(TestMobSweeper.java:195)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15040//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15040//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15040//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15040//console

This message is automatically generated.

 remove duplicate code getTableDescriptor in HTable
 --

 Key: HBASE-14203
 URL: https://issues.apache.org/jira/browse/HBASE-14203
 Project: HBase
  Issue Type: Improvement
Reporter: Heng Chen
Priority: Trivial
 Attachments: HBASE-14203.patch


 As the TODO comment says, 
 {{HTable.getTableDescriptor}} is the same as {{HAdmin.getTableDescriptor}}; 
 remove the duplicate code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14082) Add replica id to JMX metrics names

2015-08-11 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681449#comment-14681449
 ] 

Enis Soztutar commented on HBASE-14082:
---

I like Lei's proposal. I think it is also similar to Elliott's. The only 
difference is that, instead of being in a different bean, it is in the same 
bean. 

In 2.0 we can do this:  
{code}
Regions: {
aaabbb_namespace: default,
aaabbb_tablename: foo,
aaabbb_replicaid: 0,
aaabbb_mutateCount: 100,
...

bbbccc_replicaid: 1,
bbbccc_mutateCount: 100,
...
{code}

and in 1.x we can do this: 
{code}
Regions: {
namespace_default_table_foo_region_aaabbb_metric_namespace: default,
namespace_default_table_foo_region_aaabbb_metric_tablename: foo,
namespace_default_table_foo_region_aaabbb_metric_replicaid: 0,
namespace_default_table_foo_region_aaabbb_metric_mutateCount: 100,
{code}


 Add replica id to JMX metrics names
 ---

 Key: HBASE-14082
 URL: https://issues.apache.org/jira/browse/HBASE-14082
 Project: HBase
  Issue Type: Improvement
  Components: metrics
Reporter: Lei Chen
Assignee: Lei Chen
 Attachments: HBASE-14082-v1.patch, HBASE-14082-v2.patch


 Today, via JMX, one cannot distinguish a primary region from a replica. A 
 possible solution is to add replica id to JMX metrics names. The benefits may 
 include, for example:
 # Knowing the latency of a read request on a replica region means the first 
 attempt to the primary region has timed out.
 # Write requests on replicas are due to the replication process, while the 
 ones on primary are from clients.
 # In case of looking for hot spots of read operations, replicas should be 
 excluded since TIMELINE reads are sent to all replicas.
 To implement, we can change the format of metrics names found at 
 {code}Hadoop-HBase-RegionServer-Regions-Attributes{code}
 from 
 {code}namespace_namespace_table_tablename_region_regionname_metric_metricname{code}
 to
 {code}namespace_namespace_table_tablename_region_regionname_replicaid_replicaid_metric_metricname{code}
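The proposed name change can be sketched with a small, hypothetical helper (the class and method names below are illustrative, not HBase's metrics code):

```java
// Hypothetical helper composing the proposed 1.x-style metric name with a
// replica id segment. Segment labels follow the format quoted above.
public class RegionMetricName {
    static String name(String ns, String table, String region,
                       int replicaId, String metric) {
        return "namespace_" + ns
             + "_table_" + table
             + "_region_" + region
             + "_replicaid_" + replicaId
             + "_metric_" + metric;
    }

    public static void main(String[] args) {
        // A primary (replica id 0) region's mutate counter:
        System.out.println(name("default", "foo", "aaabbb", 0, "mutateCount"));
        // -> namespace_default_table_foo_region_aaabbb_replicaid_0_metric_mutateCount
    }
}
```

With the replica id embedded this way, JMX consumers can split primary and replica metrics with a simple substring match on `_replicaid_`.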



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14203) remove duplicate code getTableDescriptor in HTable

2015-08-11 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-14203:
--
Attachment: (was: HBASE-14203.patch)

 remove duplicate code getTableDescriptor in HTable
 --

 Key: HBASE-14203
 URL: https://issues.apache.org/jira/browse/HBASE-14203
 Project: HBase
  Issue Type: Improvement
Reporter: Heng Chen
Priority: Trivial
 Attachments: HBASE-14203.patch


 As TODO in comment said, 
 {{HTable.getTableDescriptor}} is same as {{HAdmin.getTableDescriptor}}. 
 remove the duplicate code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14203) remove duplicate code getTableDescriptor in HTable

2015-08-11 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-14203:
--
Attachment: HBASE-14203.patch

 remove duplicate code getTableDescriptor in HTable
 --

 Key: HBASE-14203
 URL: https://issues.apache.org/jira/browse/HBASE-14203
 Project: HBase
  Issue Type: Improvement
Reporter: Heng Chen
Priority: Trivial
 Attachments: HBASE-14203.patch


 As TODO in comment said, 
 {{HTable.getTableDescriptor}} is same as {{HAdmin.getTableDescriptor}}. 
 remove the duplicate code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14197) TestRegionServerHostname#testInvalidRegionServerHostnameAbortsServer fails in Jenkins

2015-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681359#comment-14681359
 ] 

Hudson commented on HBASE-14197:


FAILURE: Integrated in HBase-1.2 #100 (See 
[https://builds.apache.org/job/HBase-1.2/100/])
HBASE-14197 
TestRegionServerHostname#testInvalidRegionServerHostnameAbortsServer fails in 
Jenkins (apurtell: rev e6fb779f50f9a779302e907aa4bc7551e7f6ef0d)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerHostname.java


 TestRegionServerHostname#testInvalidRegionServerHostnameAbortsServer fails in 
 Jenkins
 -

 Key: HBASE-14197
 URL: https://issues.apache.org/jira/browse/HBASE-14197
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Fix For: 2.0.0, 1.2.0, 1.1.2, 1.3.0

 Attachments: 14197-v1.txt, 14197-v2.txt


 The following test failure can be observed in various recent Jenkins builds:
 {code}
 testInvalidRegionServerHostnameAbortsServer(org.apache.hadoop.hbase.regionserver.TestRegionServerHostname)
   Time elapsed: 9.344 sec  <<< FAILURE!
 java.lang.AssertionError: null
   at org.junit.Assert.fail(Assert.java:86)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at org.junit.Assert.assertTrue(Assert.java:52)
   at 
 org.apache.hadoop.hbase.regionserver.TestRegionServerHostname.testInvalidRegionServerHostnameAbortsServer(TestRegionServerHostname.java:65)
 {code}
 The test inspects the exception message and looks for a specific sentence, 
 making it vulnerable to environment changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681405#comment-14681405
 ] 

Hudson commented on HBASE-14085:


FAILURE: Integrated in HBase-0.98 #1073 (See 
[https://builds.apache.org/job/HBase-0.98/1073/])
Amend HBASE-14085 Update LICENSE and NOTICE files. (apurtell: rev 
8ef7678a481d4a0097a3aaf24fef45df739acfbf)
* hbase-hadoop1-compat/pom.xml


 Correct LICENSE and NOTICE files in artifacts
 -

 Key: HBASE-14085
 URL: https://issues.apache.org/jira/browse/HBASE-14085
 Project: HBase
  Issue Type: Task
  Components: build
Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2

 Attachments: HBASE-14085-0.98-addendum.patch, HBASE-14085.1.patch, 
 HBASE-14085.2.patch, HBASE-14085.3.patch


 +Problems:
 * checked LICENSE/NOTICE on binary
 ** binary artifact LICENSE file has not been updated to include the 
 additional license terms for contained third party dependencies
 ** binary artifact NOTICE file does not include a copyright line
 ** binary artifact NOTICE file does not appear to propagate appropriate info 
 from the NOTICE files from bundled dependencies
 * checked NOTICE on source
 ** source artifact NOTICE file does not include a copyright line
 ** source NOTICE file includes notices for third party dependencies not 
 included in the artifact
 * checked NOTICE files shipped in maven jars
 ** copyright line only says 2015 when it's very likely the contents are under 
 copyright prior to this year
 * nit: NOTICE file on jars in maven say HBase - ${module} rather than 
 Apache HBase - ${module} as required 
 refs:
 http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
 http://www.apache.org/dev/licensing-howto.html#binary
 http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14199) maven-remote-resources-plugin failure processing NOTICE.vm in hbase-assembly

2015-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681404#comment-14681404
 ] 

Hudson commented on HBASE-14199:


FAILURE: Integrated in HBase-0.98 #1073 (See 
[https://builds.apache.org/job/HBase-0.98/1073/])
HBASE-14199 maven-remote-resources-plugin failure processing NOTICE.vm in 
hbase-assembly (apurtell: rev a9a7582958f6a9aee711b9a264d89669baa390bf)
* hbase-resource-bundle/src/main/resources/supplemental-models.xml


 maven-remote-resources-plugin failure processing NOTICE.vm in hbase-assembly
 

 Key: HBASE-14199
 URL: https://issues.apache.org/jira/browse/HBASE-14199
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.14
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Blocker
 Fix For: 0.98.14

 Attachments: HBASE-14199-0.98.patch, HBASE-14199.patch, 
 HBASE-14199.patch


 Only seen when building 0.98 with -Dhadoop.profile=1.1. Happens with both JDK 
 6 and 7. 
 {noformat}
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process
 (default) on project hbase-assembly: Error rendering velocity resource. Error 
 invoking method
 'get(java.lang.Integer)' in java.util.ArrayList at META-INF/NOTICE.vm[line 
 275, column 22]:
 InvocationTargetException: Index: 0, Size: 0 - [Help 1]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14204) HBase Client API not working with pig 0.14 and 0.15

2015-08-11 Thread Hao Ji (JIRA)
Hao Ji created HBASE-14204:
--

 Summary: HBase Client API not working with pig 0.14 and 0.15
 Key: HBASE-14204
 URL: https://issues.apache.org/jira/browse/HBASE-14204
 Project: HBase
  Issue Type: Bug
  Components: API
Affects Versions: 1.0.1.1
 Environment: CentOS 6 
Hadoop 2.4.1
HBase 1.0.1.1
Pig 0.14.0 or 0.15.0

Reporter: Hao Ji


After upgrading hbase-0.98.3-hadoop2 to hbase-1.0.1.1, everything works fine: 
HMaster and RegionServers all started OK, hbase shell works OK, table scan 
works OK. Except that pig scripts fail to store data to HBase using 
org.apache.pig.backend.hadoop.hbase.HBaseStorage.


Detailed exception from pig.
{quote}
Pig Stack Trace
---
ERROR 1200: Pig script failed to parse:
line 13, column 0 pig script failed to validate: java.lang.RuntimeException: 
could not instantiate 'org.apache.pig.backend.hadoop.hbase.HBaseStorage' with 
arguments '[cf:*]'

Failed to parse: Pig script failed to parse:
line 13, column 0 pig script failed to validate: java.lang.RuntimeException: 
could not instantiate 'org.apache.pig.backend.hadoop.hbase.HBaseStorage' with 
arguments '[cf:*]'
at 
org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:199)
at org.apache.pig.PigServer$Graph.validateQuery(PigServer.java:1707)
at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1680)
at org.apache.pig.PigServer.registerQuery(PigServer.java:623)
at 
org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:1082)
at 
org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:505)
at 
org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:230)
at 
org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
at org.apache.pig.Main.run(Main.java:565)
at org.apache.pig.Main.main(Main.java:177)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by:
line 13, column 0 pig script failed to validate: java.lang.RuntimeException: 
could not instantiate 'org.apache.pig.backend.hadoop.hbase.HBaseStorage' with 
arguments '[cf:*]'
at 
org.apache.pig.parser.LogicalPlanBuilder.buildStoreOp(LogicalPlanBuilder.java:1009)
at 
org.apache.pig.parser.LogicalPlanGenerator.store_clause(LogicalPlanGenerator.java:7806)
at 
org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1669)
at 
org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:1102)
at 
org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:560)
at 
org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:421)
at 
org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:191)
... 15 more
Caused by: java.lang.RuntimeException: could not instantiate 
'org.apache.pig.backend.hadoop.hbase.HBaseStorage' with arguments '[cf:*]'
at 
org.apache.pig.impl.PigContext.instantiateFuncFromSpec(PigContext.java:772)
at 
org.apache.pig.parser.LogicalPlanBuilder.buildStoreOp(LogicalPlanBuilder.java:988)
... 21 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
at 
org.apache.pig.impl.PigContext.instantiateFuncFromSpec(PigContext.java:740)
... 22 more
Caused by: java.lang.NoSuchMethodError: 
org.apache.hadoop.hbase.client.Scan.setCacheBlocks(Z)V
at 
org.apache.pig.backend.hadoop.hbase.HBaseStorage.initScan(HBaseStorage.java:427)
at 
org.apache.pig.backend.hadoop.hbase.HBaseStorage.init(HBaseStorage.java:368)
at 
org.apache.pig.backend.hadoop.hbase.HBaseStorage.init(HBaseStorage.java:239)
... 27 more


{quote}
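The `(Z)V` in that NoSuchMethodError is a JVM method descriptor: `Z` means a boolean parameter and `V` a void return. In other words, Pig was compiled against a `Scan.setCacheBlocks(boolean)` returning void, while the client jar on the runtime classpath declares a different signature (the 1.x client made such setters return `Scan` for chaining, which is binary-incompatible even though source still compiles). A minimal decoder sketch for such descriptors, handling primitives and object types only (not arrays), is:

```java
import java.util.HashMap;
import java.util.Map;

// Decodes simple JVM method descriptors like "(Z)V" into readable form.
public class DescriptorDecoder {
    static final Map<Character, String> BASE = new HashMap<>();
    static {
        BASE.put('Z', "boolean"); BASE.put('B', "byte");   BASE.put('C', "char");
        BASE.put('S', "short");   BASE.put('I', "int");    BASE.put('J', "long");
        BASE.put('F', "float");   BASE.put('D', "double"); BASE.put('V', "void");
    }

    static String decode(String desc) {
        int close = desc.indexOf(')');
        StringBuilder params = new StringBuilder();
        int i = 1; // skip '('
        while (i < close) {
            if (params.length() > 0) params.append(", ");
            char c = desc.charAt(i);
            if (c == 'L') { // object type: Lpkg/Class;
                int semi = desc.indexOf(';', i);
                params.append(desc.substring(i + 1, semi).replace('/', '.'));
                i = semi + 1;
            } else {        // primitive
                params.append(BASE.get(c));
                i++;
            }
        }
        String ret = desc.substring(close + 1);
        String retName = ret.startsWith("L")
            ? ret.substring(1, ret.length() - 1).replace('/', '.')
            : BASE.get(ret.charAt(0));
        return retName + " (" + params + ")";
    }

    public static void main(String[] args) {
        System.out.println(decode("(Z)V")); // void (boolean)
    }
}
```

The practical takeaway from the stack trace: rebuild Pig against the 1.0.x client (or use a Pig release compiled for HBase 1.x) rather than mixing jars.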


Here is the classpath for running pig; as you can see, I am using the 
hbase-client-1.0.1.1 version. 
{quote}
[hadoop@hadoop-master-1 ~]$ pig -useHCatalog
ls: cannot access /opt/apache-hive-0.14.0-bin/lib/slf4j-api-*.jar: No such file 
or directory
ls: cannot access 

[jira] [Commented] (HBASE-14194) Undeprecate methods in ThriftServerRunner.HBaseHandler

2015-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681322#comment-14681322
 ] 

Hudson commented on HBASE-14194:


SUCCESS: Integrated in HBase-1.2-IT #83 (See 
[https://builds.apache.org/job/HBase-1.2-IT/83/])
HBASE-14194 Undeprecate methods in ThriftServerRunner.HBaseHandler (apurtell: 
rev 323e48adab37926c982fac9cc7427beb0999d8fb)
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java


 Undeprecate methods in ThriftServerRunner.HBaseHandler
 --

 Key: HBASE-14194
 URL: https://issues.apache.org/jira/browse/HBASE-14194
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Francke
Assignee: Lars Francke
Priority: Trivial
 Fix For: 2.0.0, 1.2.0, 1.3.0

 Attachments: HBASE-14194.patch


 The methods {{get}}, {{getVer}}, {{getVerTs}}, {{atomicIncrement}} were 
 deprecated back in HBASE-1304. My guess is this was because it wasn't 
 distinguishing between column family and column qualifier but I'm not sure. 
 Either way it's been in there for six years without documentation or a 
 deprecation at the interface level so it adds to my confusion and I'll attach 
 a patch to remove the deprecations.
 I guess at one point the whole old Thrift server will be deprecated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13480) ShortCircuitConnection doesn't short-circuit all calls as expected

2015-08-11 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681368#comment-14681368
 ] 

Jingcheng Du commented on HBASE-13480:
--

How about overriding the getTable methods in 
ConnectionUtils#createShortCircuitHConnection() for the anonymous 
ConnectionAdapter, using this (the anonymous ConnectionAdapter) instead 
of wrappedConnection?

 ShortCircuitConnection doesn't short-circuit all calls as expected
 --

 Key: HBASE-13480
 URL: https://issues.apache.org/jira/browse/HBASE-13480
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 1.0.0, 2.0.0, 1.1.0
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 2.0.0, 1.3.0, 1.2.1, 1.0.3, 1.1.3


 Noticed the following situation in debugging unexpected unit tests failures 
 in HBASE-13351.
 {{ConnectionUtils#createShortCircuitHConnection(Connection, ServerName, 
 AdminService.BlockingInterface, ClientService.BlockingInterface)}} is 
 intended to avoid the extra RPC by calling the server's instantiation of the 
 protobuf rpc stub directly for the AdminService and ClientService.
 The problem is that this is insufficient to actually avoid extra remote 
 RPCs as all other calls to the Connection are routed to a real Connection 
 instance. As such, any object created by the real Connection (such as an 
 HTable) will use the real Connection, not the SSC.
 The end result is that 
 {{MasterRpcService#reportRegionStateTransition(RpcController, 
 ReportRegionStateTransitionRequest)}} will make additional remote RPCs over 
 what it thinks is an SSC through a {{Get}} on {{HTable}} which was 
 constructed using the SSC, but the {{Get}} itself will use the underlying 
 real Connection instead of the SSC. With insufficiently sized thread pools, 
 this has been observed to result in RPC deadlock in the HMaster where an RPC 
 attempts to make another RPC but there are no more threads available to 
 service the second RPC so the first RPC blocks indefinitely.
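The delegation leak described above can be sketched with hypothetical interfaces (none of these are HBase's real API): a wrapper that short-circuits some calls but lets the wrapped delegate hand out factory objects, so those objects keep calling the delegate and bypass the wrapper.

```java
// Generic illustration of the short-circuit pitfall; all names are made up.
interface Conn {
    String call();    // the call the wrapper short-circuits
    Table getTable(); // factory method NOT overridden by the wrapper
}

class RealConn implements Conn {
    public String call() { return "remote RPC"; }
    public Table getTable() { return new Table(this); } // captures the REAL conn
}

class Table {
    private final Conn conn;
    Table(Conn conn) { this.conn = conn; }
    String get() { return conn.call(); } // uses whichever conn it captured
}

class ShortCircuitConn implements Conn {
    private final Conn wrapped;
    ShortCircuitConn(Conn wrapped) { this.wrapped = wrapped; }
    public String call() { return "local call"; }          // short-circuited
    public Table getTable() { return wrapped.getTable(); } // leaks the delegate!
}

public class ShortCircuitPitfall {
    public static void main(String[] args) {
        Conn ssc = new ShortCircuitConn(new RealConn());
        System.out.println(ssc.call());           // local call
        System.out.println(ssc.getTable().get()); // remote RPC -- bypasses wrapper
    }
}
```

Overriding the factory method in the wrapper so the Table captures `this` (the short-circuit wrapper) rather than the delegate, along the lines of the comment above, closes the leak.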



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13089) Fix test compilation error on building against htrace-3.2.0-incubating

2015-08-11 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-13089:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.1.2
   1.0.2
   Status: Resolved  (was: Patch Available)

It seems that this is committed to 1.2+. I've also cherry-picked it to 
branch-1.0 and branch-1.1. Resolving. 

 Fix test compilation error on building against htrace-3.2.0-incubating
 --

 Key: HBASE-13089
 URL: https://issues.apache.org/jira/browse/HBASE-13089
 Project: HBase
  Issue Type: Task
Reporter: Masatake Iwasaki
Assignee: Esteban Gutierrez
Priority: Minor
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-13089.patch


 Test compilation fails if you use htrace-3.2.0 because Span.ROOT_SPAN_ID is 
 removed. It is used in TestHTraceHooks and should be replaced on the next 
 bump of the htrace version. This is not a problem as long as 
 htrace-3.1.0 is used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-14117) Check DBEs where fields are being read from Bytebuffers but unused.

2015-08-11 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du reassigned HBASE-14117:


Assignee: Jingcheng Du  (was: ramkrishna.s.vasudevan)

 Check DBEs where fields are being read from Bytebuffers but unused.
 ---

 Key: HBASE-14117
 URL: https://issues.apache.org/jira/browse/HBASE-14117
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: Jingcheng Du

 {code}
 public Cell getFirstKeyCellInBlock(ByteBuff block) {
 block.mark();
 block.position(Bytes.SIZEOF_INT);
 int keyLength = ByteBuff.readCompressedInt(block);
 // TODO : See if we can avoid these reads as the read values are not 
 getting used
 ByteBuff.readCompressedInt(block);
 {code}
 In DBEs, in many places we read integers just to skip them. This JIRA is to 
 see if we can avoid this and instead go position based, as per a review 
 comment in HBASE-12213.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14197) TestRegionServerHostname#testInvalidRegionServerHostnameAbortsServer fails in Jenkins

2015-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681323#comment-14681323
 ] 

Hudson commented on HBASE-14197:


SUCCESS: Integrated in HBase-1.2-IT #83 (See 
[https://builds.apache.org/job/HBase-1.2-IT/83/])
HBASE-14197 
TestRegionServerHostname#testInvalidRegionServerHostnameAbortsServer fails in 
Jenkins (apurtell: rev e6fb779f50f9a779302e907aa4bc7551e7f6ef0d)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerHostname.java


 TestRegionServerHostname#testInvalidRegionServerHostnameAbortsServer fails in 
 Jenkins
 -

 Key: HBASE-14197
 URL: https://issues.apache.org/jira/browse/HBASE-14197
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Fix For: 2.0.0, 1.2.0, 1.1.2, 1.3.0

 Attachments: 14197-v1.txt, 14197-v2.txt


 The following test failure can be observed in various recent Jenkins builds:
 {code}
 testInvalidRegionServerHostnameAbortsServer(org.apache.hadoop.hbase.regionserver.TestRegionServerHostname)
   Time elapsed: 9.344 sec   FAILURE!
 java.lang.AssertionError: null
   at org.junit.Assert.fail(Assert.java:86)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at org.junit.Assert.assertTrue(Assert.java:52)
   at 
 org.apache.hadoop.hbase.regionserver.TestRegionServerHostname.testInvalidRegionServerHostnameAbortsServer(TestRegionServerHostname.java:65)
 {code}
 The test inspects the exception message and looks for a specific sentence, 
 making it vulnerable to environment changes.
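A more robust pattern is to assert on the exception type rather than on message text. A minimal plain-Java sketch of the idea (the method name startWithBadHostname is a hypothetical stand-in for the server-start path under test):

```java
public class ExceptionAssertDemo {
    // Stand-in for the region server aborting on an invalid hostname.
    static void startWithBadHostname() {
        throw new IllegalArgumentException("hostname wording may vary across environments");
    }

    public static void main(String[] args) {
        boolean failedAsExpected = false;
        try {
            startWithBadHostname();
        } catch (IllegalArgumentException expected) {
            // Match on the exception type, not the wording, so message changes
            // in different environments do not break the test.
            failedAsExpected = true;
        }
        System.out.println(failedAsExpected); // prints true
    }
}
```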



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14205) RegionCoprocessorHost System.nanoTime() performance bottleneck

2015-08-11 Thread Jan Van Besien (JIRA)
Jan Van Besien created HBASE-14205:
--

 Summary: RegionCoprocessorHost System.nanoTime() performance 
bottleneck
 Key: HBASE-14205
 URL: https://issues.apache.org/jira/browse/HBASE-14205
 Project: HBase
  Issue Type: Bug
Reporter: Jan Van Besien


The tracking of execution time of coprocessor methods, introduced in HBASE-11516, 
adds 2 calls to System.nanoTime() per coprocessor method per coprocessor. This 
results in a serious performance bottleneck in certain scenarios.

For example, consider the scenario where many rows are being ingested (PUT) into 
a table which has multiple coprocessors (we have up to 20 coprocessors). This 
results in 8 extra calls to System.nanoTime() per row (prePut, postPut, 
postStartRegionOperation and postCloseRegionOperation), which has been seen to 
result in a 50% increase in execution time.

I think it is generally considered bad practice to measure execution times at 
such a small scale (per single operation). Also note that measurements are 
taken even for coprocessors that have no actual implementation for certain 
operations, which makes the problem worse.
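One possible mitigation, sketched here with illustrative names rather than the actual RegionCoprocessorHost API, is to take timestamps only when at least one coprocessor actually overrides the hook:

```java
public class TimedHookDemo {
    // Only pay for System.nanoTime() when at least one coprocessor actually
    // overrides the hook; otherwise skip both the timing and the dispatch.
    // The method and parameter names are illustrative, not the real HBase API.
    static long invokeWithOptionalTiming(boolean anyOverrides, Runnable hook) {
        if (!anyOverrides) {
            return 0L; // no implementation registered: no nanoTime() calls at all
        }
        long start = System.nanoTime();
        hook.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        // Even a no-op hook costs two nanoTime() calls when something overrides it
        long elapsed = invokeWithOptionalTiming(true, () -> { });
        System.out.println(elapsed >= 0); // prints true
    }
}
```

This addresses the second complaint directly: coprocessors that do not implement an operation would no longer be timed at all.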



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14206) MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges

2015-08-11 Thread Anton Nazaruk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681558#comment-14681558
 ] 

Anton Nazaruk commented on HBASE-14206:
---

Which branch do you use? I've taken the code from branch-1.1.0, applied the fix, 
and the test passed... weird.

 MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges
 ---

 Key: HBASE-14206
 URL: https://issues.apache.org/jira/browse/HBASE-14206
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
 Environment: linux, java7
Reporter: Anton Nazaruk
Assignee: Ted Yu
Priority: Critical
  Labels: filter
 Attachments: 14206-test.patch


 I haven't found a way to attach a test program to the JIRA issue, so I put it below:
 {code}
 public class MultiRowRangeFilterTest {
  
 byte[] key1Start = new byte[] {-3};
 byte[] key1End  = new byte[] {-2};
 byte[] key2Start = new byte[] {5};
 byte[] key2End  = new byte[] {6};
 byte[] badKey = new byte[] {-10};
 @Test
 public void testRanges() throws IOException {
 MultiRowRangeFilter filter = new MultiRowRangeFilter(Arrays.asList(
 new MultiRowRangeFilter.RowRange(key1Start, true, key1End, 
 false),
 new MultiRowRangeFilter.RowRange(key2Start, true, key2End, 
 false)
 ));
 filter.filterRowKey(badKey, 0, 1);
 /*
 * FAILS -- includes BAD key!
 * Expected :SEEK_NEXT_USING_HINT
 * Actual   :INCLUDE
 * */
 assertEquals(Filter.ReturnCode.SEEK_NEXT_USING_HINT, 
 filter.filterKeyValue(null));
 }
 }
 {code}
 It seems to happen on 2.0.0-SNAPSHOT too, but I wasn't able to link one with 
 included class.
 I have played some time with algorithm, and found that quick fix may be 
 applied to getNextRangeIndex(byte[] rowKey) method (hbase-client:1.1.0) :
 {code}
 if (insertionPosition == 0 && 
 !rangeList.get(insertionPosition).contains(rowKey)) {
 return ROW_BEFORE_FIRST_RANGE;
 }
 // FIX START
 if(!this.initialized) {
 this.initialized = true;
 }
 // FIX END
 return insertionPosition;
 {code} 
 Thanks, hope it will help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14203) remove duplicate code getTableDescriptor in HTable

2015-08-11 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681616#comment-14681616
 ] 

Enis Soztutar commented on HBASE-14203:
---

Creating an Admin is a costly operation. Maybe we can extract the method to be 
a static one. 
{code}
+HTableDescriptor htd = 
this.connection.getAdmin().getTableDescriptor(tableName);
{code}

 remove duplicate code getTableDescriptor in HTable
 --

 Key: HBASE-14203
 URL: https://issues.apache.org/jira/browse/HBASE-14203
 Project: HBase
  Issue Type: Improvement
Reporter: Heng Chen
Priority: Trivial
 Attachments: HBASE-14203.patch


 As the TODO comment says, 
 {{HTable.getTableDescriptor}} is the same as {{HAdmin.getTableDescriptor}}. 
 Remove the duplicate code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14202) Reduce garbage we create

2015-08-11 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-14202:
---
Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

 Reduce garbage we create
 

 Key: HBASE-14202
 URL: https://issues.apache.org/jira/browse/HBASE-14202
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 2.0.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-14202.patch


 Two optimizations w.r.t. the number of short-lived objects we create:
 1. The IOEngine#read call to read from the L2 cache always creates a Pair 
 object to return the BB and MemoryType. We can avoid this by making the read 
 API return a Cacheable, and also passing the CacheableDeserializer to be used 
 to the read API. A setter for MemoryType is already there in the Cacheable 
 interface.
 2. ByteBuff#asSubByteBuffer(int, int, Pair) avoids Pair object creation 
 every time, as we pass a shared Pair object. Still, since Pair can take only 
 Objects, the primitive int has to be boxed into an Integer object every time. 
 This can be avoided by creating a new Pair type which is a pair of an Object 
 and a primitive int.
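Such a pair type could look like the following minimal sketch (the class name ObjectIntPair is illustrative; the setters allow one instance to be reused across calls without boxing the int):

```java
// Pair of an object reference and a primitive int; storing the int as a
// primitive avoids allocating an Integer on every reuse of the shared pair.
public class ObjectIntPair<T> {
    private T first;
    private int second;

    public void setFirst(T first) { this.first = first; }
    public void setSecond(int second) { this.second = second; }
    public T getFirst() { return first; }
    public int getSecond() { return second; }

    public static void main(String[] args) {
        ObjectIntPair<String> p = new ObjectIntPair<>();
        p.setFirst("buffer");
        p.setSecond(42);                   // stored as a primitive, no boxing
        System.out.println(p.getSecond()); // prints 42
    }
}
```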



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13408) HBase In-Memory Memstore Compaction

2015-08-11 Thread Eshcar Hillel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eshcar Hillel updated HBASE-13408:
--
Status: Patch Available  (was: Open)

 HBase In-Memory Memstore Compaction
 ---

 Key: HBASE-13408
 URL: https://issues.apache.org/jira/browse/HBASE-13408
 Project: HBase
  Issue Type: New Feature
Reporter: Eshcar Hillel
 Attachments: HBASE-13408-trunk-v01.patch, 
 HBaseIn-MemoryMemstoreCompactionDesignDocument-ver02.pdf, 
 HBaseIn-MemoryMemstoreCompactionDesignDocument.pdf, 
 InMemoryMemstoreCompactionEvaluationResults.pdf


 A store unit holds a column family in a region, where the memstore is its 
 in-memory component. The memstore absorbs all updates to the store; from time 
 to time these updates are flushed to a file on disk, where they are 
 compacted. Unlike disk components, the memstore is not compacted until it is 
 written to the filesystem and optionally to block-cache. This may result in 
 underutilization of the memory due to duplicate entries per row, for example, 
 when hot data is continuously updated. 
 Generally, the faster the data accumulates in memory, the more flushes are 
 triggered and the more frequently the data sinks to disk, slowing down 
 retrieval of data, even if it is very recent.
 In high-churn workloads, compacting the memstore can help maintain the data 
 in memory, and thereby speed up data retrieval. 
 We suggest a new compacted memstore with the following principles:
 1. The data is kept in memory for as long as possible.
 2. Memstore data is either compacted or in the process of being compacted. 
 3. Allow a panic mode, which may interrupt an in-progress compaction and 
 force a flush of part of the memstore.
 We suggest applying this optimization only to in-memory column families.
 A design document is attached.
 This feature was previously discussed in HBASE-5311.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13089) Fix test compilation error on building against htrace-3.2.0-incubating

2015-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681587#comment-14681587
 ] 

Hudson commented on HBASE-13089:


SUCCESS: Integrated in HBase-1.1 #607 (See 
[https://builds.apache.org/job/HBase-1.1/607/])
HBASE-13089 Fix test compilation error on building against 
htrace-3.2.0-incubating (enis: rev b043d27f59a2c1d025ae3558052f499a554be179)
* hbase-server/src/test/java/org/apache/hadoop/hbase/trace/TestHTraceHooks.java


 Fix test compilation error on building against htrace-3.2.0-incubating
 --

 Key: HBASE-13089
 URL: https://issues.apache.org/jira/browse/HBASE-13089
 Project: HBase
  Issue Type: Task
Reporter: Masatake Iwasaki
Assignee: Esteban Gutierrez
Priority: Minor
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-13089.patch


 Test compilation fails if you use htrace-3.2.0 because Span.ROOT_SPAN_ID is 
 removed. It is used in TestHTraceHooks and should be replaced on the next bump 
 of the htrace version. This is not a problem as long as htrace-3.1.0 is used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14207) Region was hijacked and remained in transition when RS failed to open a region and later regionplan changed to new RS on retry

2015-08-11 Thread Pankaj Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar updated HBASE-14207:
-
Affects Version/s: 0.98.6

 Region was hijacked and remained in transition when RS failed to open a 
 region and later regionplan changed to new RS on retry
 --

 Key: HBASE-14207
 URL: https://issues.apache.org/jira/browse/HBASE-14207
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.98.6
Reporter: Pankaj Kumar
Assignee: Pankaj Kumar
Priority: Critical

 In the production environment, the following events happened:
 1. Master tried to assign a region to an RS, but due to a 
 KeeperException$SessionExpiredException the RS failed to open the region.
   In the RS log, we saw multiple WARN logs related to 
 KeeperException$SessionExpiredException: 
KeeperErrorCode = Session expired for 
 /hbase/region-in-transition/08f1935d652e5dbdac09b423b8f9401b
Unable to get data of znode 
 /hbase/region-in-transition/08f1935d652e5dbdac09b423b8f9401b
 2. Master retried to assign the region to the same RS, but the RS failed again.
 3. On the second retry a new plan was formed, and this time the plan 
 destination (RS) was different, so the master sent the request to the new RS 
 to open the region. But the new RS failed to open the region because the 
 server recorded in the ZNODE did not match the expected current server name. 
 Logs Snippet:
 {noformat}
 HM
 2015-07-14 03:50:29,759 | INFO  | master:T101PC03VM13:21300 | Processing 
 08f1935d652e5dbdac09b423b8f9401b in state: M_ZK_REGION_OFFLINE | 
 org.apache.hadoop.hbase.master.AssignmentManager.processRegionsInTransition(AssignmentManager.java:644)
 2015-07-14 03:50:29,759 | INFO  | master:T101PC03VM13:21300 | Transitioned 
 {08f1935d652e5dbdac09b423b8f9401b state=OFFLINE, ts=1436817029679, 
 server=null} to {08f1935d652e5dbdac09b423b8f9401b state=PENDING_OPEN, 
 ts=1436817029759, server=T101PC03VM13,21302,1436816690692} | 
 org.apache.hadoop.hbase.master.RegionStates.updateRegionState(RegionStates.java:327)
 2015-07-14 03:50:29,760 | INFO  | master:T101PC03VM13:21300 | Processed 
 region 08f1935d652e5dbdac09b423b8f9401b in state M_ZK_REGION_OFFLINE, on 
 server: T101PC03VM13,21302,1436816690692 | 
 org.apache.hadoop.hbase.master.AssignmentManager.processRegionsInTransition(AssignmentManager.java:768)
 2015-07-14 03:50:29,800 | INFO  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Assigning 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 T101PC03VM13,21302,1436816690692 | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1983)
 2015-07-14 03:50:29,801 | WARN  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Failed assignment of 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 T101PC03VM13,21302,1436816690692, trying to assign elsewhere instead; try=1 
 of 10 | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2077)
 2015-07-14 03:50:29,802 | INFO  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Trying to re-assign 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 the same failed server. | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2123)
 2015-07-14 03:50:31,804 | INFO  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Assigning 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 T101PC03VM13,21302,1436816690692 | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1983)
 2015-07-14 03:50:31,806 | WARN  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Failed assignment of 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 T101PC03VM13,21302,1436816690692, trying to assign elsewhere instead; try=2 
 of 10 | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2077)
 2015-07-14 03:50:31,807 | INFO  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Transitioned 
 {08f1935d652e5dbdac09b423b8f9401b state=PENDING_OPEN, ts=1436817031804, 
 server=T101PC03VM13,21302,1436816690692} to {08f1935d652e5dbdac09b423b8f9401b 
 state=OFFLINE, ts=1436817031807, server=T101PC03VM13,21302,1436816690692} | 
 org.apache.hadoop.hbase.master.RegionStates.updateRegionState(RegionStates.java:327)
 2015-07-14 03:50:31,807 | INFO  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Assigning 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 T101PC03VM14,21302,1436816997967 | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1983)
 2015-07-14 03:50:31,807 | INFO  | 
 

[jira] [Commented] (HBASE-13408) HBase In-Memory Memstore Compaction

2015-08-11 Thread Eshcar Hillel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681774#comment-14681774
 ] 

Eshcar Hillel commented on HBASE-13408:
---

We've submitted the patch based on trunk. It includes all the changes that were 
presented for 0.98, plus the comments from the code review and the changes 
necessary to adapt the code to the master branch. We also added a link to the 
review board.
Next we plan to work on WAL truncation upon memory compaction based on the 
discussion in this Jira.


 HBase In-Memory Memstore Compaction
 ---

 Key: HBASE-13408
 URL: https://issues.apache.org/jira/browse/HBASE-13408
 Project: HBase
  Issue Type: New Feature
Reporter: Eshcar Hillel
 Attachments: HBASE-13408-trunk-v01.patch, 
 HBaseIn-MemoryMemstoreCompactionDesignDocument-ver02.pdf, 
 HBaseIn-MemoryMemstoreCompactionDesignDocument.pdf, 
 InMemoryMemstoreCompactionEvaluationResults.pdf


 A store unit holds a column family in a region, where the memstore is its 
 in-memory component. The memstore absorbs all updates to the store; from time 
 to time these updates are flushed to a file on disk, where they are 
 compacted. Unlike disk components, the memstore is not compacted until it is 
 written to the filesystem and optionally to block-cache. This may result in 
 underutilization of the memory due to duplicate entries per row, for example, 
 when hot data is continuously updated. 
 Generally, the faster the data accumulates in memory, the more flushes are 
 triggered and the more frequently the data sinks to disk, slowing down 
 retrieval of data, even if it is very recent.
 In high-churn workloads, compacting the memstore can help maintain the data 
 in memory, and thereby speed up data retrieval. 
 We suggest a new compacted memstore with the following principles:
 1. The data is kept in memory for as long as possible.
 2. Memstore data is either compacted or in the process of being compacted. 
 3. Allow a panic mode, which may interrupt an in-progress compaction and 
 force a flush of part of the memstore.
 We suggest applying this optimization only to in-memory column families.
 A design document is attached.
 This feature was previously discussed in HBASE-5311.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14206) MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges

2015-08-11 Thread Anton Nazaruk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681632#comment-14681632
 ] 

Anton Nazaruk commented on HBASE-14206:
---

I just ran TestMultiRowRangeFilter with the changes applied (the new test method 
plus the fix mentioned above) against the newest origin/master (2.0.0-SNAPSHOT) 
-- all tests are green.

 MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges
 ---

 Key: HBASE-14206
 URL: https://issues.apache.org/jira/browse/HBASE-14206
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
 Environment: linux, java7
Reporter: Anton Nazaruk
Assignee: Ted Yu
Priority: Critical
  Labels: filter
 Attachments: 14206-test.patch


 I haven't found a way to attach a test program to the JIRA issue, so I put it below:
 {code}
 public class MultiRowRangeFilterTest {
  
 byte[] key1Start = new byte[] {-3};
 byte[] key1End  = new byte[] {-2};
 byte[] key2Start = new byte[] {5};
 byte[] key2End  = new byte[] {6};
 byte[] badKey = new byte[] {-10};
 @Test
 public void testRanges() throws IOException {
 MultiRowRangeFilter filter = new MultiRowRangeFilter(Arrays.asList(
 new MultiRowRangeFilter.RowRange(key1Start, true, key1End, 
 false),
 new MultiRowRangeFilter.RowRange(key2Start, true, key2End, 
 false)
 ));
 filter.filterRowKey(badKey, 0, 1);
 /*
 * FAILS -- includes BAD key!
 * Expected :SEEK_NEXT_USING_HINT
 * Actual   :INCLUDE
 * */
 assertEquals(Filter.ReturnCode.SEEK_NEXT_USING_HINT, 
 filter.filterKeyValue(null));
 }
 }
 {code}
 It seems to happen on 2.0.0-SNAPSHOT too, but I wasn't able to link one with 
 included class.
 I have played some time with algorithm, and found that quick fix may be 
 applied to getNextRangeIndex(byte[] rowKey) method (hbase-client:1.1.0) :
 {code}
 if (insertionPosition == 0 && 
 !rangeList.get(insertionPosition).contains(rowKey)) {
 return ROW_BEFORE_FIRST_RANGE;
 }
 // FIX START
 if(!this.initialized) {
 this.initialized = true;
 }
 // FIX END
 return insertionPosition;
 {code} 
 Thanks, hope it will help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5878) Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.

2015-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681696#comment-14681696
 ] 

Hudson commented on HBASE-5878:
---

FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1027 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1027/])
HBASE-5878 Use getVisibleLength public api from HdfsDataInputStream from Hadoop 
2 (apurtell: rev f0e954a4d7e2e96d1fda1d6c5f125db29a552a17)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogReader.java


 Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.
 ---

 Key: HBASE-5878
 URL: https://issues.apache.org/jira/browse/HBASE-5878
 Project: HBase
  Issue Type: Bug
  Components: wal
Reporter: Uma Maheswara Rao G
Assignee: Ashish Singhi
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-5878-branch-1.0.patch, HBASE-5878-v2.patch, 
 HBASE-5878-v3.patch, HBASE-5878-v4.patch, HBASE-5878-v5-0.98.patch, 
 HBASE-5878-v5.patch, HBASE-5878-v5.patch, HBASE-5878-v6-0.98.patch, 
 HBASE-5878-v6.patch, HBASE-5878-v7-0.98.patch, HBASE-5878.patch


 SequenceFileLogReader: 
 Currently HBase uses the getFileLength api from the DFSInputStream class via 
 reflection. DFSInputStream is not exposed as public, so this may change in the 
 future. HDFS now exposes HdfsDataInputStream as a public API.
 We can make use of it when we are not able to find the getFileLength api 
 in DFSInputStream, as an else condition, so that we will not have any sudden 
 surprise like the one we are facing today.
 Also, the code just logs one warn message and proceeds if any exception is 
 thrown while getting the length. I think we should re-throw the exception 
 because there is no point in continuing into data loss.
 {code}
 long adjust = 0;
   try {
     Field fIn = FilterInputStream.class.getDeclaredField("in");
     fIn.setAccessible(true);
     Object realIn = fIn.get(this.in);
     // In hadoop 0.22, DFSInputStream is a standalone class.  Before this,
     // it was an inner class of DFSClient.
     if (realIn.getClass().getName().endsWith("DFSInputStream")) {
       Method getFileLength = realIn.getClass().
         getDeclaredMethod("getFileLength", new Class<?>[]{});
       getFileLength.setAccessible(true);
       long realLength = ((Long)getFileLength.
         invoke(realIn, new Object[]{})).longValue();
       assert(realLength >= this.length);
       adjust = realLength - this.length;
     } else {
       LOG.info("Input stream class: " + realIn.getClass().getName() +
         ", not adjusting length");
     }
   } catch(Exception e) {
     SequenceFileLogReader.LOG.warn(
       "Error while trying to get accurate file length.  " +
       "Truncation / data loss may occur if RegionServers die.", e);
   }
   return adjust + super.getPos();
 {code}
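The proposed else-condition amounts to preferring a public accessor when the stream exposes one and falling back to reflection otherwise. A self-contained sketch of that dispatch pattern, using a local stub in place of Hadoop's HdfsDataInputStream (whose getVisibleLength() is the public API the issue refers to):

```java
import java.io.InputStream;

public class LengthDemo {
    // Local stand-in for org.apache.hadoop.hdfs.client.HdfsDataInputStream,
    // kept minimal so the sketch compiles without Hadoop on the classpath.
    static class VisibleLengthStream extends InputStream {
        private final long visibleLength;
        VisibleLengthStream(long len) { this.visibleLength = len; }
        long getVisibleLength() { return visibleLength; }
        @Override public int read() { return -1; }
    }

    // Prefer the public API; only older stream classes would need the
    // reflection fallback (represented here by the exception branch).
    static long lengthOf(InputStream in) {
        if (in instanceof VisibleLengthStream) {
            return ((VisibleLengthStream) in).getVisibleLength();
        }
        throw new IllegalStateException("no public length API; reflection fallback needed");
    }

    public static void main(String[] args) {
        System.out.println(lengthOf(new VisibleLengthStream(128L))); // prints 128
    }
}
```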



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14199) maven-remote-resources-plugin failure processing NOTICE.vm in hbase-assembly

2015-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681695#comment-14681695
 ] 

Hudson commented on HBASE-14199:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1027 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1027/])
HBASE-14199 maven-remote-resources-plugin failure processing NOTICE.vm in 
hbase-assembly (apurtell: rev a9a7582958f6a9aee711b9a264d89669baa390bf)
* hbase-resource-bundle/src/main/resources/supplemental-models.xml


 maven-remote-resources-plugin failure processing NOTICE.vm in hbase-assembly
 

 Key: HBASE-14199
 URL: https://issues.apache.org/jira/browse/HBASE-14199
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.14
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Blocker
 Fix For: 0.98.14

 Attachments: HBASE-14199-0.98.patch, HBASE-14199.patch, 
 HBASE-14199.patch


 Only seen when building 0.98 with -Dhadoop.profile=1.1. Happens with both JDK 
 6 and 7. 
 {noformat}
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process
 (default) on project hbase-assembly: Error rendering velocity resource. Error 
 invoking method
 'get(java.lang.Integer)' in java.util.ArrayList at META-INF/NOTICE.vm[line 
 275, column 22]:
 InvocationTargetException: Index: 0, Size: 0 - [Help 1]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681697#comment-14681697
 ] 

Hudson commented on HBASE-14085:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1027 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1027/])
Amend HBASE-14085 Update LICENSE and NOTICE files. (apurtell: rev 
8ef7678a481d4a0097a3aaf24fef45df739acfbf)
* hbase-hadoop1-compat/pom.xml


 Correct LICENSE and NOTICE files in artifacts
 -

 Key: HBASE-14085
 URL: https://issues.apache.org/jira/browse/HBASE-14085
 Project: HBase
  Issue Type: Task
  Components: build
Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2

 Attachments: HBASE-14085-0.98-addendum.patch, HBASE-14085.1.patch, 
 HBASE-14085.2.patch, HBASE-14085.3.patch


 +Problems:
 * checked LICENSE/NOTICE on binary
 ** binary artifact LICENSE file has not been updated to include the 
 additional license terms for contained third party dependencies
 ** binary artifact NOTICE file does not include a copyright line
 ** binary artifact NOTICE file does not appear to propagate appropriate info 
 from the NOTICE files from bundled dependencies
 * checked NOTICE on source
 ** source artifact NOTICE file does not include a copyright line
 ** source NOTICE file includes notices for third party dependencies not 
 included in the artifact
 * checked NOTICE files shipped in maven jars
 ** copyright line only says 2015 when it's very likely the contents are under 
 copyright prior to this year
 * nit: NOTICE file on jars in maven says "HBase - ${module}" rather than 
 "Apache HBase - ${module}" as required 
 refs:
 http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
 http://www.apache.org/dev/licensing-howto.html#binary
 http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14207) Region was hijacked and remained in transition when RS failed to open a region and later regionplan changed to new RS on retry

2015-08-11 Thread Pankaj Kumar (JIRA)
Pankaj Kumar created HBASE-14207:


 Summary: Region was hijacked and remained in transition when RS 
failed to open a region and later regionplan changed to new RS on retry
 Key: HBASE-14207
 URL: https://issues.apache.org/jira/browse/HBASE-14207
 Project: HBase
  Issue Type: Bug
  Components: master
Reporter: Pankaj Kumar
Assignee: Pankaj Kumar


In the production environment, the following events happened:
1. Master tried to assign a region to an RS, but due to a 
KeeperException$SessionExpiredException the RS failed to open the region.
In the RS log, we saw multiple WARN logs related to 
KeeperException$SessionExpiredException: 
 KeeperErrorCode = Session expired for 
/hbase/region-in-transition/08f1935d652e5dbdac09b423b8f9401b
 Unable to get data of znode 
/hbase/region-in-transition/08f1935d652e5dbdac09b423b8f9401b
2. Master retried to assign the region to the same RS, but the RS failed again.
3. On the second retry a new plan was formed, and this time the plan destination 
(RS) was different, so the master sent the request to the new RS to open the 
region. But the new RS failed to open the region because the server recorded in 
the ZNODE did not match the expected current server name. 

Logs Snippet:

{noformat}
HM

2015-07-14 03:50:29,759 | INFO  | master:T101PC03VM13:21300 | Processing 
08f1935d652e5dbdac09b423b8f9401b in state: M_ZK_REGION_OFFLINE | 
org.apache.hadoop.hbase.master.AssignmentManager.processRegionsInTransition(AssignmentManager.java:644)
2015-07-14 03:50:29,759 | INFO  | master:T101PC03VM13:21300 | Transitioned 
{08f1935d652e5dbdac09b423b8f9401b state=OFFLINE, ts=1436817029679, server=null} 
to {08f1935d652e5dbdac09b423b8f9401b state=PENDING_OPEN, ts=1436817029759, 
server=T101PC03VM13,21302,1436816690692} | 
org.apache.hadoop.hbase.master.RegionStates.updateRegionState(RegionStates.java:327)
2015-07-14 03:50:29,760 | INFO  | master:T101PC03VM13:21300 | Processed region 
08f1935d652e5dbdac09b423b8f9401b in state M_ZK_REGION_OFFLINE, on server: 
T101PC03VM13,21302,1436816690692 | 
org.apache.hadoop.hbase.master.AssignmentManager.processRegionsInTransition(AssignmentManager.java:768)
2015-07-14 03:50:29,800 | INFO  | MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 
| Assigning 
INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
T101PC03VM13,21302,1436816690692 | 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1983)
2015-07-14 03:50:29,801 | WARN  | MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 
| Failed assignment of 
INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
T101PC03VM13,21302,1436816690692, trying to assign elsewhere instead; try=1 of 
10 | 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2077)
2015-07-14 03:50:29,802 | INFO  | MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 
| Trying to re-assign 
INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
the same failed server. | 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2123)
2015-07-14 03:50:31,804 | INFO  | MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 
| Assigning 
INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
T101PC03VM13,21302,1436816690692 | 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1983)
2015-07-14 03:50:31,806 | WARN  | MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 
| Failed assignment of 
INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
T101PC03VM13,21302,1436816690692, trying to assign elsewhere instead; try=2 of 
10 | 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2077)
2015-07-14 03:50:31,807 | INFO  | MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 
| Transitioned {08f1935d652e5dbdac09b423b8f9401b state=PENDING_OPEN, 
ts=1436817031804, server=T101PC03VM13,21302,1436816690692} to 
{08f1935d652e5dbdac09b423b8f9401b state=OFFLINE, ts=1436817031807, 
server=T101PC03VM13,21302,1436816690692} | 
org.apache.hadoop.hbase.master.RegionStates.updateRegionState(RegionStates.java:327)
2015-07-14 03:50:31,807 | INFO  | MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 
| Assigning 
INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
T101PC03VM14,21302,1436816997967 | 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1983)
2015-07-14 03:50:31,807 | INFO  | MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 
| Transitioned {08f1935d652e5dbdac09b423b8f9401b state=OFFLINE, 
ts=1436817031807, server=T101PC03VM13,21302,1436816690692} to 
{08f1935d652e5dbdac09b423b8f9401b state=PENDING_OPEN, ts=1436817031807, 
server=T101PC03VM14,21302,1436816997967} | 
org.apache.hadoop.hbase.master.RegionStates.updateRegionState(RegionStates.java:327)
2015-07-14 03:51:09,501 | INFO  | 

[jira] [Commented] (HBASE-14204) HBase Client API not working with pig 0.14 and 0.15

2015-08-11 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681591#comment-14681591
 ] 

Enis Soztutar commented on HBASE-14204:
---

This looks like a Pig problem or a problem with the classpath. Remember that 
HBase 0.98 versions and 1.0 versions are NOT binary compatible (meaning that 
you cannot swap 0.98 jars with 1.0 jars). Please ask the question on the 
pig-user and hbase-user mailing lists. 

 HBase Client API not working with pig 0.14 and 0.15
 ---

 Key: HBASE-14204
 URL: https://issues.apache.org/jira/browse/HBASE-14204
 Project: HBase
  Issue Type: Bug
  Components: API
Affects Versions: 1.0.1.1
 Environment: CentOS 6 
 Hadoop 2.4.1
 HBase 1.0.1.1
 Pig 0.14.0 or 0.15.0
Reporter: Hao Ji

 After upgrading hbase-0.98.3-hadoop2 to hbase-1.0.1.1, everything works fine: 
 HMaster and RegionServers all started OK, the hbase shell works OK, and table 
 scans work OK. However, the Pig script fails to store data to HBase using 
 org.apache.pig.backend.hadoop.hbase.HBaseStorage.
 Detailed exception from pig.
 {quote}
 Pig Stack Trace
 ---
 ERROR 1200: Pig script failed to parse:
 line 13, column 0 pig script failed to validate: 
 java.lang.RuntimeException: could not instantiate 
 'org.apache.pig.backend.hadoop.hbase.HBaseStorage' with arguments '[cf:*]'
 Failed to parse: Pig script failed to parse:
 line 13, column 0 pig script failed to validate: 
 java.lang.RuntimeException: could not instantiate 
 'org.apache.pig.backend.hadoop.hbase.HBaseStorage' with arguments '[cf:*]'
 at 
 org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:199)
 at org.apache.pig.PigServer$Graph.validateQuery(PigServer.java:1707)
 at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1680)
 at org.apache.pig.PigServer.registerQuery(PigServer.java:623)
 at 
 org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:1082)
 at 
 org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:505)
 at 
 org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:230)
 at 
 org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
 at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
 at org.apache.pig.Main.run(Main.java:565)
 at org.apache.pig.Main.main(Main.java:177)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
 Caused by:
 line 13, column 0 pig script failed to validate: 
 java.lang.RuntimeException: could not instantiate 
 'org.apache.pig.backend.hadoop.hbase.HBaseStorage' with arguments '[cf:*]'
 at 
 org.apache.pig.parser.LogicalPlanBuilder.buildStoreOp(LogicalPlanBuilder.java:1009)
 at 
 org.apache.pig.parser.LogicalPlanGenerator.store_clause(LogicalPlanGenerator.java:7806)
 at 
 org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1669)
 at 
 org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:1102)
 at 
 org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:560)
 at 
 org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:421)
 at 
 org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:191)
 ... 15 more
 Caused by: java.lang.RuntimeException: could not instantiate 
 'org.apache.pig.backend.hadoop.hbase.HBaseStorage' with arguments '[cf:*]'
 at 
 org.apache.pig.impl.PigContext.instantiateFuncFromSpec(PigContext.java:772)
 at 
 org.apache.pig.parser.LogicalPlanBuilder.buildStoreOp(LogicalPlanBuilder.java:988)
 ... 21 more
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
 at 
 org.apache.pig.impl.PigContext.instantiateFuncFromSpec(PigContext.java:740)
 ... 22 more
 Caused by: java.lang.NoSuchMethodError: 
 org.apache.hadoop.hbase.client.Scan.setCacheBlocks(Z)V
 at 
 org.apache.pig.backend.hadoop.hbase.HBaseStorage.initScan(HBaseStorage.java:427)
 at 
 

[jira] [Resolved] (HBASE-14204) HBase Client API not working with pig 0.14 and 0.15

2015-08-11 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar resolved HBASE-14204.
---
Resolution: Invalid

 HBase Client API not working with pig 0.14 and 0.15
 ---

 Key: HBASE-14204
 URL: https://issues.apache.org/jira/browse/HBASE-14204
 Project: HBase
  Issue Type: Bug
  Components: API
Affects Versions: 1.0.1.1
 Environment: CentOS 6 
 Hadoop 2.4.1
 HBase 1.0.1.1
 Pig 0.14.0 or 0.15.0
Reporter: Hao Ji

 After upgrading hbase-0.98.3-hadoop2 to hbase-1.0.1.1, everything works fine: 
 HMaster and RegionServers all started OK, the hbase shell works OK, and table 
 scans work OK. However, the Pig script fails to store data to HBase using 
 org.apache.pig.backend.hadoop.hbase.HBaseStorage.
 Detailed exception from pig.
 {quote}
 Pig Stack Trace
 ---
 ERROR 1200: Pig script failed to parse:
 line 13, column 0 pig script failed to validate: 
 java.lang.RuntimeException: could not instantiate 
 'org.apache.pig.backend.hadoop.hbase.HBaseStorage' with arguments '[cf:*]'
 Failed to parse: Pig script failed to parse:
 line 13, column 0 pig script failed to validate: 
 java.lang.RuntimeException: could not instantiate 
 'org.apache.pig.backend.hadoop.hbase.HBaseStorage' with arguments '[cf:*]'
 at 
 org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:199)
 at org.apache.pig.PigServer$Graph.validateQuery(PigServer.java:1707)
 at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1680)
 at org.apache.pig.PigServer.registerQuery(PigServer.java:623)
 at 
 org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:1082)
 at 
 org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:505)
 at 
 org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:230)
 at 
 org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
 at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
 at org.apache.pig.Main.run(Main.java:565)
 at org.apache.pig.Main.main(Main.java:177)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
 Caused by:
 line 13, column 0 pig script failed to validate: 
 java.lang.RuntimeException: could not instantiate 
 'org.apache.pig.backend.hadoop.hbase.HBaseStorage' with arguments '[cf:*]'
 at 
 org.apache.pig.parser.LogicalPlanBuilder.buildStoreOp(LogicalPlanBuilder.java:1009)
 at 
 org.apache.pig.parser.LogicalPlanGenerator.store_clause(LogicalPlanGenerator.java:7806)
 at 
 org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1669)
 at 
 org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:1102)
 at 
 org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:560)
 at 
 org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:421)
 at 
 org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:191)
 ... 15 more
 Caused by: java.lang.RuntimeException: could not instantiate 
 'org.apache.pig.backend.hadoop.hbase.HBaseStorage' with arguments '[cf:*]'
 at 
 org.apache.pig.impl.PigContext.instantiateFuncFromSpec(PigContext.java:772)
 at 
 org.apache.pig.parser.LogicalPlanBuilder.buildStoreOp(LogicalPlanBuilder.java:988)
 ... 21 more
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
 at 
 org.apache.pig.impl.PigContext.instantiateFuncFromSpec(PigContext.java:740)
 ... 22 more
 Caused by: java.lang.NoSuchMethodError: 
 org.apache.hadoop.hbase.client.Scan.setCacheBlocks(Z)V
 at 
 org.apache.pig.backend.hadoop.hbase.HBaseStorage.initScan(HBaseStorage.java:427)
 at 
 org.apache.pig.backend.hadoop.hbase.HBaseStorage.init(HBaseStorage.java:368)
 at 
 org.apache.pig.backend.hadoop.hbase.HBaseStorage.init(HBaseStorage.java:239)
 ... 27 more
 
 {quote}
 Here is the classpath 

[jira] [Updated] (HBASE-13408) HBase In-Memory Memstore Compaction

2015-08-11 Thread Eshcar Hillel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eshcar Hillel updated HBASE-13408:
--
Attachment: HBASE-13408-trunk-v01.patch

 HBase In-Memory Memstore Compaction
 ---

 Key: HBASE-13408
 URL: https://issues.apache.org/jira/browse/HBASE-13408
 Project: HBase
  Issue Type: New Feature
Reporter: Eshcar Hillel
 Attachments: HBASE-13408-trunk-v01.patch, 
 HBaseIn-MemoryMemstoreCompactionDesignDocument-ver02.pdf, 
 HBaseIn-MemoryMemstoreCompactionDesignDocument.pdf, 
 InMemoryMemstoreCompactionEvaluationResults.pdf


 A store unit holds a column family in a region, where the memstore is its 
 in-memory component. The memstore absorbs all updates to the store; from time 
 to time these updates are flushed to a file on disk, where they are 
 compacted. Unlike disk components, the memstore is not compacted until it is 
 written to the filesystem and optionally to block-cache. This may result in 
 underutilization of the memory due to duplicate entries per row, for example, 
 when hot data is continuously updated. 
 Generally, the faster data accumulates in memory, the more flushes are 
 triggered and the more frequently data sinks to disk, slowing down retrieval 
 of data, even very recent data.
 In high-churn workloads, compacting the memstore can help maintain the data 
 in memory, and thereby speed up data retrieval. 
 We suggest a new compacted memstore with the following principles:
 1.The data is kept in memory for as long as possible
 2.Memstore data is either compacted or in process of being compacted 
 3.Allow a panic mode, which may interrupt an in-progress compaction and 
 force a flush of part of the memstore.
 We suggest applying this optimization only to in-memory column families.
 A design document is attached.
 This feature was previously discussed in HBASE-5311.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13425) Documentation nit in REST Gateway impersonation section

2015-08-11 Thread Jeremie Gomez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681760#comment-14681760
 ] 

Jeremie Gomez commented on HBASE-13425:
---

Thank you Misty !

 Documentation nit in REST Gateway impersonation section
 ---

 Key: HBASE-13425
 URL: https://issues.apache.org/jira/browse/HBASE-13425
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.0.0
Reporter: Jeremie Gomez
Assignee: Misty Stanley-Jones
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-13425.patch


 In section 55.8. REST Gateway Impersonation Configuration, there is another 
 property that needs to be set (and thus documented).
 After this sentence (To enable REST gateway impersonation, add the following 
 to the hbase-site.xml file for every REST gateway.), we should add:
 <property>
   <name>hbase.rest.support.proxyuser</name>
   <value>true</value>
 </property>
 If not set, doing a curl call on the REST gateway gives the error "support 
 for proxyuser is not configured".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14207) Region was hijacked and remained in transition when RS failed to open a region and later regionplan changed to new RS on retry

2015-08-11 Thread Pankaj Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar updated HBASE-14207:
-
Priority: Critical  (was: Major)

 Region was hijacked and remained in transition when RS failed to open a 
 region and later regionplan changed to new RS on retry
 --

 Key: HBASE-14207
 URL: https://issues.apache.org/jira/browse/HBASE-14207
 Project: HBase
  Issue Type: Bug
  Components: master
Reporter: Pankaj Kumar
Assignee: Pankaj Kumar
Priority: Critical

 In a production environment, the following events happened:
 1. Master was trying to assign a region to an RS, but the RS failed to open 
 the region due to KeeperException$SessionExpiredException.
   In the RS log, saw multiple WARN logs related to 
 KeeperException$SessionExpiredException 
KeeperErrorCode = Session expired for 
 /hbase/region-in-transition/08f1935d652e5dbdac09b423b8f9401b
Unable to get data of znode 
 /hbase/region-in-transition/08f1935d652e5dbdac09b423b8f9401b
 2. Master retried to assign the region to the same RS, but the RS failed again.
 3. On the second retry a new plan was formed, this time with a different 
 destination (RS), so the master sent the request to the new RS to open the 
 region. But the new RS failed to open the region because the server in the 
 ZNODE did not match the expected current server name. 
 Logs Snippet:
 {noformat}
 HM
 2015-07-14 03:50:29,759 | INFO  | master:T101PC03VM13:21300 | Processing 
 08f1935d652e5dbdac09b423b8f9401b in state: M_ZK_REGION_OFFLINE | 
 org.apache.hadoop.hbase.master.AssignmentManager.processRegionsInTransition(AssignmentManager.java:644)
 2015-07-14 03:50:29,759 | INFO  | master:T101PC03VM13:21300 | Transitioned 
 {08f1935d652e5dbdac09b423b8f9401b state=OFFLINE, ts=1436817029679, 
 server=null} to {08f1935d652e5dbdac09b423b8f9401b state=PENDING_OPEN, 
 ts=1436817029759, server=T101PC03VM13,21302,1436816690692} | 
 org.apache.hadoop.hbase.master.RegionStates.updateRegionState(RegionStates.java:327)
 2015-07-14 03:50:29,760 | INFO  | master:T101PC03VM13:21300 | Processed 
 region 08f1935d652e5dbdac09b423b8f9401b in state M_ZK_REGION_OFFLINE, on 
 server: T101PC03VM13,21302,1436816690692 | 
 org.apache.hadoop.hbase.master.AssignmentManager.processRegionsInTransition(AssignmentManager.java:768)
 2015-07-14 03:50:29,800 | INFO  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Assigning 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 T101PC03VM13,21302,1436816690692 | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1983)
 2015-07-14 03:50:29,801 | WARN  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Failed assignment of 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 T101PC03VM13,21302,1436816690692, trying to assign elsewhere instead; try=1 
 of 10 | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2077)
 2015-07-14 03:50:29,802 | INFO  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Trying to re-assign 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 the same failed server. | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2123)
 2015-07-14 03:50:31,804 | INFO  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Assigning 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 T101PC03VM13,21302,1436816690692 | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1983)
 2015-07-14 03:50:31,806 | WARN  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Failed assignment of 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 T101PC03VM13,21302,1436816690692, trying to assign elsewhere instead; try=2 
 of 10 | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2077)
 2015-07-14 03:50:31,807 | INFO  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Transitioned 
 {08f1935d652e5dbdac09b423b8f9401b state=PENDING_OPEN, ts=1436817031804, 
 server=T101PC03VM13,21302,1436816690692} to {08f1935d652e5dbdac09b423b8f9401b 
 state=OFFLINE, ts=1436817031807, server=T101PC03VM13,21302,1436816690692} | 
 org.apache.hadoop.hbase.master.RegionStates.updateRegionState(RegionStates.java:327)
 2015-07-14 03:50:31,807 | INFO  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Assigning 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 T101PC03VM14,21302,1436816997967 | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1983)
 2015-07-14 03:50:31,807 | INFO  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | 

[jira] [Commented] (HBASE-14207) Region was hijacked and remained in transition when RS failed to open a region and later regionplan changed to new RS on retry

2015-08-11 Thread Pankaj Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681777#comment-14681777
 ] 

Pankaj Kumar commented on HBASE-14207:
--

Since the ZNODE was not modified with the new destination server details, the 
RS failed to open it. 

On plan change we should set 'setOfflineInZK' to true, so that the ZNODE will 
be modified with the new destination server details after resetting 
'versionOfOfflineNode' to -1.
{code}
if (plan != newPlan
    && !plan.getDestination().equals(newPlan.getDestination())) {
  // Clean out plan we failed to execute and one that doesn't look like it'll
  // succeed anyways; we need a new plan!
  // Transition back to OFFLINE
  currentState = regionStates.updateRegionState(region, State.OFFLINE);
  versionOfOfflineNode = -1;
  plan = newPlan;
}
{code}
Please correct me if I am wrong.
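To make the intent concrete, here is a minimal, self-contained sketch of the proposed behavior. The class and method names (ServerName, RegionPlan, versionForRetry) are hypothetical stand-ins, not the actual AssignmentManager code: when a retry produces a plan with a different destination server, the cached offline-znode version is reset to -1 so the znode is rewritten for the new destination before the open request is sent.

```java
// Hypothetical sketch -- not the real HBase AssignmentManager classes.
class ServerName {
    private final String name;
    ServerName(String name) { this.name = name; }
    @Override public boolean equals(Object o) {
        return o instanceof ServerName && ((ServerName) o).name.equals(this.name);
    }
    @Override public int hashCode() { return name.hashCode(); }
}

class RegionPlan {
    private final ServerName destination;
    RegionPlan(ServerName destination) { this.destination = destination; }
    ServerName getDestination() { return destination; }
}

public class PlanChangeSketch {
    // Returns the znode version to use for the next assign attempt:
    // -1 forces the master to re-offline the znode for the new server.
    static int versionForRetry(RegionPlan plan, RegionPlan newPlan, int currentVersion) {
        if (plan != newPlan
                && !plan.getDestination().equals(newPlan.getDestination())) {
            return -1; // destination changed: znode must be refreshed
        }
        return currentVersion; // same destination: keep the existing version
    }
}
```

With a reset version, the subsequent offline-in-ZK step would update the znode's server field, so the new RS sees its own name rather than the failed server's.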

 Region was hijacked and remained in transition when RS failed to open a 
 region and later regionplan changed to new RS on retry
 --

 Key: HBASE-14207
 URL: https://issues.apache.org/jira/browse/HBASE-14207
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.98.6
Reporter: Pankaj Kumar
Assignee: Pankaj Kumar
Priority: Critical

 In a production environment, the following events happened:
 1. Master was trying to assign a region to an RS, but the RS failed to open 
 the region due to KeeperException$SessionExpiredException.
   In the RS log, saw multiple WARN logs related to 
 KeeperException$SessionExpiredException 
KeeperErrorCode = Session expired for 
 /hbase/region-in-transition/08f1935d652e5dbdac09b423b8f9401b
Unable to get data of znode 
 /hbase/region-in-transition/08f1935d652e5dbdac09b423b8f9401b
 2. Master retried to assign the region to the same RS, but the RS failed again.
 3. On the second retry a new plan was formed, this time with a different 
 destination (RS), so the master sent the request to the new RS to open the 
 region. But the new RS failed to open the region because the server in the 
 ZNODE did not match the expected current server name. 
 Logs Snippet:
 {noformat}
 HM
 2015-07-14 03:50:29,759 | INFO  | master:T101PC03VM13:21300 | Processing 
 08f1935d652e5dbdac09b423b8f9401b in state: M_ZK_REGION_OFFLINE | 
 org.apache.hadoop.hbase.master.AssignmentManager.processRegionsInTransition(AssignmentManager.java:644)
 2015-07-14 03:50:29,759 | INFO  | master:T101PC03VM13:21300 | Transitioned 
 {08f1935d652e5dbdac09b423b8f9401b state=OFFLINE, ts=1436817029679, 
 server=null} to {08f1935d652e5dbdac09b423b8f9401b state=PENDING_OPEN, 
 ts=1436817029759, server=T101PC03VM13,21302,1436816690692} | 
 org.apache.hadoop.hbase.master.RegionStates.updateRegionState(RegionStates.java:327)
 2015-07-14 03:50:29,760 | INFO  | master:T101PC03VM13:21300 | Processed 
 region 08f1935d652e5dbdac09b423b8f9401b in state M_ZK_REGION_OFFLINE, on 
 server: T101PC03VM13,21302,1436816690692 | 
 org.apache.hadoop.hbase.master.AssignmentManager.processRegionsInTransition(AssignmentManager.java:768)
 2015-07-14 03:50:29,800 | INFO  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Assigning 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 T101PC03VM13,21302,1436816690692 | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1983)
 2015-07-14 03:50:29,801 | WARN  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Failed assignment of 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 T101PC03VM13,21302,1436816690692, trying to assign elsewhere instead; try=1 
 of 10 | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2077)
 2015-07-14 03:50:29,802 | INFO  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Trying to re-assign 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 the same failed server. | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2123)
 2015-07-14 03:50:31,804 | INFO  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Assigning 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 T101PC03VM13,21302,1436816690692 | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1983)
 2015-07-14 03:50:31,806 | WARN  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Failed assignment of 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 T101PC03VM13,21302,1436816690692, trying to assign elsewhere instead; try=2 
 of 10 | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2077)
 2015-07-14 

[jira] [Updated] (HBASE-14207) Region was hijacked and remained in transition when RS failed to open a region and later regionplan changed to new RS on retry

2015-08-11 Thread Pankaj Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar updated HBASE-14207:
-
Description: 
In a production environment, the following events happened:
1. Master was trying to assign a region to an RS, but the RS failed to open 
the region due to KeeperException$SessionExpiredException.
In the RS log, saw multiple WARN logs related to 
KeeperException$SessionExpiredException 
 KeeperErrorCode = Session expired for 
/hbase/region-in-transition/08f1935d652e5dbdac09b423b8f9401b
 Unable to get data of znode 
/hbase/region-in-transition/08f1935d652e5dbdac09b423b8f9401b
2. Master retried to assign the region to the same RS, but the RS failed again.
3. On the second retry a new plan was formed, this time with a different 
destination (RS), so the master sent the request to the new RS to open the 
region. But the new RS failed to open the region because the server in the 
ZNODE did not match the expected current server name. 

Logs Snippet:

{noformat}
HM

2015-07-14 03:50:29,759 | INFO  | master:T101PC03VM13:21300 | Processing 
08f1935d652e5dbdac09b423b8f9401b in state: M_ZK_REGION_OFFLINE | 
org.apache.hadoop.hbase.master.AssignmentManager.processRegionsInTransition(AssignmentManager.java:644)
2015-07-14 03:50:29,759 | INFO  | master:T101PC03VM13:21300 | Transitioned 
{08f1935d652e5dbdac09b423b8f9401b state=OFFLINE, ts=1436817029679, server=null} 
to {08f1935d652e5dbdac09b423b8f9401b state=PENDING_OPEN, ts=1436817029759, 
server=T101PC03VM13,21302,1436816690692} | 
org.apache.hadoop.hbase.master.RegionStates.updateRegionState(RegionStates.java:327)
2015-07-14 03:50:29,760 | INFO  | master:T101PC03VM13:21300 | Processed region 
08f1935d652e5dbdac09b423b8f9401b in state M_ZK_REGION_OFFLINE, on server: 
T101PC03VM13,21302,1436816690692 | 
org.apache.hadoop.hbase.master.AssignmentManager.processRegionsInTransition(AssignmentManager.java:768)
2015-07-14 03:50:29,800 | INFO  | MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 
| Assigning 
INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
T101PC03VM13,21302,1436816690692 | 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1983)
2015-07-14 03:50:29,801 | WARN  | MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 
| Failed assignment of 
INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
T101PC03VM13,21302,1436816690692, trying to assign elsewhere instead; try=1 of 
10 | 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2077)
2015-07-14 03:50:29,802 | INFO  | MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 
| Trying to re-assign 
INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
the same failed server. | 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2123)
2015-07-14 03:50:31,804 | INFO  | MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 
| Assigning 
INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
T101PC03VM13,21302,1436816690692 | 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1983)
2015-07-14 03:50:31,806 | WARN  | MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 
| Failed assignment of 
INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
T101PC03VM13,21302,1436816690692, trying to assign elsewhere instead; try=2 of 
10 | 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2077)
2015-07-14 03:50:31,807 | INFO  | MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 
| Transitioned {08f1935d652e5dbdac09b423b8f9401b state=PENDING_OPEN, 
ts=1436817031804, server=T101PC03VM13,21302,1436816690692} to 
{08f1935d652e5dbdac09b423b8f9401b state=OFFLINE, ts=1436817031807, 
server=T101PC03VM13,21302,1436816690692} | 
org.apache.hadoop.hbase.master.RegionStates.updateRegionState(RegionStates.java:327)
2015-07-14 03:50:31,807 | INFO  | MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 
| Assigning 
INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
T101PC03VM14,21302,1436816997967 | 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1983)
2015-07-14 03:50:31,807 | INFO  | MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 
| Transitioned {08f1935d652e5dbdac09b423b8f9401b state=OFFLINE, 
ts=1436817031807, server=T101PC03VM13,21302,1436816690692} to 
{08f1935d652e5dbdac09b423b8f9401b state=PENDING_OPEN, ts=1436817031807, 
server=T101PC03VM14,21302,1436816997967} | 
org.apache.hadoop.hbase.master.RegionStates.updateRegionState(RegionStates.java:327)
2015-07-14 03:51:09,501 | INFO  | MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-4 
| Skip assigning region in transition on other 
server{08f1935d652e5dbdac09b423b8f9401b state=PENDING_OPEN, ts=1436817031807, 
server=T101PC03VM14,21302,1436816997967} | 

[jira] [Commented] (HBASE-14206) MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges

2015-08-11 Thread Jiajia Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681831#comment-14681831
 ] 

Jiajia Li commented on HBASE-14206:
---

I ran the test based on trunk, but with one minor change in the test:
{code}
filter.filterRowKey(badKey, 0, 1);
{code}
to
{code}
filter.filterRowKey(KeyValueUtil.createFirstOnRow(badKey));
{code}
I think the fix is ok.

 MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges
 ---

 Key: HBASE-14206
 URL: https://issues.apache.org/jira/browse/HBASE-14206
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
 Environment: linux, java7
Reporter: Anton Nazaruk
Assignee: Ted Yu
Priority: Critical
  Labels: filter
 Attachments: 14206-test.patch


 I haven't found a way to attach test program to JIRA issue, so put it below :
 {code}
 public class MultiRowRangeFilterTest {
  
 byte[] key1Start = new byte[] {-3};
 byte[] key1End  = new byte[] {-2};
 byte[] key2Start = new byte[] {5};
 byte[] key2End  = new byte[] {6};
 byte[] badKey = new byte[] {-10};
 @Test
 public void testRanges() throws IOException {
 MultiRowRangeFilter filter = new MultiRowRangeFilter(Arrays.asList(
 new MultiRowRangeFilter.RowRange(key1Start, true, key1End, 
 false),
 new MultiRowRangeFilter.RowRange(key2Start, true, key2End, 
 false)
 ));
 filter.filterRowKey(badKey, 0, 1);
 /*
 * FAILS -- includes BAD key!
 * Expected :SEEK_NEXT_USING_HINT
 * Actual   :INCLUDE
 * */
 assertEquals(Filter.ReturnCode.SEEK_NEXT_USING_HINT, 
 filter.filterKeyValue(null));
 }
 }
 {code}
 It seems to happen on 2.0.0-SNAPSHOT too, but I wasn't able to link one with 
 included class.
 I have played some time with algorithm, and found that quick fix may be 
 applied to getNextRangeIndex(byte[] rowKey) method (hbase-client:1.1.0) :
 {code}
 if (insertionPosition == 0
     && !rangeList.get(insertionPosition).contains(rowKey)) {
   return ROW_BEFORE_FIRST_RANGE;
 }
 // FIX START
 if (!this.initialized) {
   this.initialized = true;
 }
 // FIX END
 return insertionPosition;
 {code} 
 Thanks, hope it will help.
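 The lookup the quick fix above touches is essentially a binary search over 
 sorted, non-overlapping row ranges. A minimal sketch of the expected contract 
 (hypothetical names; this is not the actual MultiRowRangeFilter code): a key 
 inside a range is included, and any other key, including one before the first 
 range, should lead to a seek hint, never an INCLUDE.

```java
import java.util.Arrays;

// Hypothetical sketch of a multi-range row lookup, not HBase code.
public class RangeLookupSketch {
    static final int ROW_BEFORE_FIRST_RANGE = -1;

    // Unsigned lexicographic comparison, matching HBase row-key ordering.
    static int compare(byte[] a, byte[] b) {
        for (int i = 0; i < Math.min(a.length, b.length); i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    // Returns the index of the [start, end) range containing key, the index
    // of the next range (the caller seeks to its start), or
    // ROW_BEFORE_FIRST_RANGE when the key sorts before all ranges.
    static int nextRangeIndex(byte[][] starts, byte[][] ends, byte[] key) {
        int pos = Arrays.binarySearch(starts, key, RangeLookupSketch::compare);
        if (pos >= 0) return pos;                 // key equals a range start
        int insertion = -pos - 1;                 // first range starting after key
        if (insertion > 0 && compare(key, ends[insertion - 1]) < 0) {
            return insertion - 1;                 // key inside the previous range
        }
        if (insertion == 0) return ROW_BEFORE_FIRST_RANGE; // before all ranges
        return insertion;                         // between ranges: seek forward
    }
}
```

 Note that {-10} compares as unsigned 0xF6 = 246, so it sorts after {5}/{6} 
 but before {-3} = 0xFD, i.e. between the two ranges in the report; the 
 lookup should answer with the next range's index, not an INCLUDE.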



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14206) MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges

2015-08-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681894#comment-14681894
 ] 

Ted Yu commented on HBASE-14206:


The problem and fix were provided by you, Anton. That was why I assigned this 
to you.

You can try applying the patch on branch-1 and 0.98 to see if any modification 
is needed.
If so, you can attach a patch for the branch(es).

Include branch name in the filename. e.g. 14206-branch-1.txt for branch-1

 MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges
 ---

 Key: HBASE-14206
 URL: https://issues.apache.org/jira/browse/HBASE-14206
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
 Environment: linux, java7
Reporter: Anton Nazaruk
Assignee: Anton Nazaruk
Priority: Critical
  Labels: filter
 Attachments: 14206-test.patch, 14206-v1.txt


 I haven't found a way to attach test program to JIRA issue, so put it below :
 {code}
 public class MultiRowRangeFilterTest {
  
 byte[] key1Start = new byte[] {-3};
 byte[] key1End  = new byte[] {-2};
 byte[] key2Start = new byte[] {5};
 byte[] key2End  = new byte[] {6};
 byte[] badKey = new byte[] {-10};
 @Test
 public void testRanges() throws IOException {
 MultiRowRangeFilter filter = new MultiRowRangeFilter(Arrays.asList(
 new MultiRowRangeFilter.RowRange(key1Start, true, key1End, 
 false),
 new MultiRowRangeFilter.RowRange(key2Start, true, key2End, 
 false)
 ));
 filter.filterRowKey(badKey, 0, 1);
 /*
 * FAILS -- includes BAD key!
 * Expected :SEEK_NEXT_USING_HINT
 * Actual   :INCLUDE
 * */
 assertEquals(Filter.ReturnCode.SEEK_NEXT_USING_HINT, 
 filter.filterKeyValue(null));
 }
 }
 {code}
 It seems to happen on 2.0.0-SNAPSHOT too, but I wasn't able to link one with 
 included class.
 I have played some time with algorithm, and found that quick fix may be 
 applied to getNextRangeIndex(byte[] rowKey) method (hbase-client:1.1.0) :
 {code}
 if (insertionPosition == 0
     && !rangeList.get(insertionPosition).contains(rowKey)) {
   return ROW_BEFORE_FIRST_RANGE;
 }
 // FIX START
 if (!this.initialized) {
   this.initialized = true;
 }
 // FIX END
 return insertionPosition;
 {code} 
 Thanks, hope it will help.
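For illustration, the insertion-position logic the quick fix touches can be sketched in a self-contained form. This is a simplified, hypothetical stand-in for {{getNextRangeIndex}}, not HBase's actual implementation: {{RangeIndexSketch}} and {{nextRangeIndex}} are invented names, and int keys with signed comparison replace row-key byte arrays. The point it shows is that a key sorting before every range must be reported as {{ROW_BEFORE_FIRST_RANGE}}, while a key between ranges yields the index of the next range to seek to:

```java
// Simplified, hypothetical sketch of range lookup over a sorted, disjoint
// list of [start, stop) ranges; keys are ints instead of row-key byte[]s.
public class RangeIndexSketch {
    static final int ROW_BEFORE_FIRST_RANGE = -1;

    // Returns the index of the range containing `key`, or the index of the
    // next range to seek to, or ROW_BEFORE_FIRST_RANGE.
    static int nextRangeIndex(int[][] ranges, int key) {
        // Binary search for the first range whose start is greater than key.
        int lo = 0, hi = ranges.length - 1, insertionPosition = ranges.length;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (ranges[mid][0] > key) { insertionPosition = mid; hi = mid - 1; }
            else lo = mid + 1;
        }
        if (insertionPosition == 0) {
            return ROW_BEFORE_FIRST_RANGE; // key sorts before every range
        }
        int previous = insertionPosition - 1;
        if (key < ranges[previous][1]) {
            return previous; // key falls inside the previous range
        }
        return insertionPosition; // between ranges: seek hint to the next one
    }

    public static void main(String[] args) {
        int[][] ranges = { {-3, -2}, {5, 6} }; // mirrors the test's two ranges
        System.out.println(nextRangeIndex(ranges, -10)); // before first range
        System.out.println(nextRangeIndex(ranges, -3));  // inside first range
        System.out.println(nextRangeIndex(ranges, 0));   // between the ranges
    }
}
```

In the real filter, the "between ranges" and "before first range" outcomes are what drive {{ReturnCode.SEEK_NEXT_USING_HINT}} instead of {{INCLUDE}}.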



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14206) MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges

2015-08-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681822#comment-14681822
 ] 

Ted Yu commented on HBASE-14206:


I use master branch with suggested fix and get:
{code}
testRanges(org.apache.hadoop.hbase.filter.TestMultiRowRangeFilter)  Time 
elapsed: 0.045 sec   FAILURE!
java.lang.AssertionError: expected:<SEEK_NEXT_USING_HINT> but was:<null>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.hbase.filter.TestMultiRowRangeFilter.testRanges(TestMultiRowRangeFilter.java:94)
{code}

 MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges
 ---

 Key: HBASE-14206
 URL: https://issues.apache.org/jira/browse/HBASE-14206
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
 Environment: linux, java7
Reporter: Anton Nazaruk
Assignee: Ted Yu
Priority: Critical
  Labels: filter
 Attachments: 14206-test.patch


 I haven't found a way to attach test program to JIRA issue, so put it below :
 {code}
 public class MultiRowRangeFilterTest {
  
 byte[] key1Start = new byte[] {-3};
 byte[] key1End  = new byte[] {-2};
 byte[] key2Start = new byte[] {5};
 byte[] key2End  = new byte[] {6};
 byte[] badKey = new byte[] {-10};
 @Test
 public void testRanges() throws IOException {
 MultiRowRangeFilter filter = new MultiRowRangeFilter(Arrays.asList(
 new MultiRowRangeFilter.RowRange(key1Start, true, key1End, 
 false),
 new MultiRowRangeFilter.RowRange(key2Start, true, key2End, 
 false)
 ));
 filter.filterRowKey(badKey, 0, 1);
 /*
 * FAILS -- includes BAD key!
 * Expected :SEEK_NEXT_USING_HINT
 * Actual   :INCLUDE
 * */
 assertEquals(Filter.ReturnCode.SEEK_NEXT_USING_HINT, 
 filter.filterKeyValue(null));
 }
 }
 {code}
 It seems to happen on 2.0.0-SNAPSHOT too, but I wasn't able to link one with 
 included class.
 I have played some time with algorithm, and found that quick fix may be 
 applied to getNextRangeIndex(byte[] rowKey) method (hbase-client:1.1.0) :
 {code}
 if (insertionPosition == 0 && !rangeList.get(insertionPosition).contains(rowKey)) {
 return ROW_BEFORE_FIRST_RANGE;
 }
 // FIX START
 if(!this.initialized) {
 this.initialized = true;
 }
 // FIX END
 return insertionPosition;
 {code} 
 Thanks, hope it will help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13212) Procedure V2 - master Create/Modify/Delete namespace

2015-08-11 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681851#comment-14681851
 ] 

Matteo Bertozzi commented on HBASE-13212:
-

I don't think the table operations should have to take the NS lock; that can be 
done down at the scheduling level (procedure queue).
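One way to picture pushing the lock down into the scheduler: table procedures take a shared (read) lock on their namespace, while namespace create/modify/delete procedures take the exclusive (write) lock before being run. This is a rough sketch under that assumption; the class and method names are hypothetical, not HBase's ProcedureScheduler API.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of per-namespace locking done at the scheduling level: many table
// procedures may run concurrently in one namespace, but a namespace
// procedure excludes them all (and vice versa).
public class NamespaceLockSketch {
    private final ConcurrentHashMap<String, ReentrantReadWriteLock> locks =
        new ConcurrentHashMap<>();

    private ReentrantReadWriteLock lockFor(String ns) {
        return locks.computeIfAbsent(ns, k -> new ReentrantReadWriteLock());
    }

    // Called by the scheduler before dispatching a table procedure.
    public boolean tryAcquireTableOp(String ns) {
        return lockFor(ns).readLock().tryLock();
    }
    public void releaseTableOp(String ns) { lockFor(ns).readLock().unlock(); }

    // Called before dispatching a namespace create/modify/delete procedure.
    public boolean tryAcquireNamespaceOp(String ns) {
        return lockFor(ns).writeLock().tryLock();
    }
    public void releaseNamespaceOp(String ns) { lockFor(ns).writeLock().unlock(); }

    public static void main(String[] args) {
        NamespaceLockSketch s = new NamespaceLockSketch();
        // A table op holds the shared lock...
        System.out.println(s.tryAcquireTableOp("ns1"));
        // ...so a namespace op cannot acquire the exclusive lock yet.
        System.out.println(s.tryAcquireNamespaceOp("ns1"));
    }
}
```

The table procedures themselves never see the lock; only the dispatch path does, which is the separation the comment above is suggesting.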

 Procedure V2 - master Create/Modify/Delete namespace
 

 Key: HBASE-13212
 URL: https://issues.apache.org/jira/browse/HBASE-13212
 Project: HBase
  Issue Type: Sub-task
  Components: master
Affects Versions: 2.0.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
  Labels: reliability
 Attachments: HBASE-13212.v1-master.patch, HBASE-13212.v2-master.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 master side, part of HBASE-12439
 starts up the procedure executor on the master
 and replaces the create/modify/delete namespace handlers with the procedure 
 version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14203) remove duplicate code getTableDescriptor in HTable

2015-08-11 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681891#comment-14681891
 ] 

Heng Chen commented on HBASE-14203:
---

{quote}
Creating an Admin is a costly operation. Maybe we can extract the method to be 
a static one.
{quote}

Thanks for your reply!

I checked the code; it is difficult to extract {{getTableDescriptor}} into a 
static method because it calls non-static methods. To avoid creating an 
{{Admin}} every time {{Connection.getAdmin()}} is called, we can use a 
singleton instance.

I updated the patch. 
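The caching idea can be sketched with the initialization-on-demand holder idiom; everything here ({{LazyAdminHolder}}, the {{Expensive}} stand-in for an Admin) is illustrative and not the actual patch.

```java
// Minimal sketch of caching an expensive-to-create helper behind a lazily
// initialized singleton. Expensive is a stand-in for something like Admin.
public class LazyAdminHolder {
    static class Expensive {
        static int constructions = 0;   // counts how often we paid the cost
        Expensive() { constructions++; }
        String describeTable(String name) { return "descriptor:" + name; }
    }

    // Initialization-on-demand holder idiom: the JVM loads the nested class
    // (and constructs INSTANCE) only on first use, at most once, with no
    // explicit locking needed.
    private static class Holder {
        static final Expensive INSTANCE = new Expensive();
    }

    static Expensive get() { return Holder.INSTANCE; }

    public static void main(String[] args) {
        Expensive a = get();
        Expensive b = get();
        System.out.println(a == b);                  // same instance reused
        System.out.println(Expensive.constructions); // constructed once
    }
}
```

A caveat with a singleton Admin specifically is lifecycle: the cached instance is tied to one connection, so closing or invalidating it has to be handled somewhere.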



 remove duplicate code getTableDescriptor in HTable
 --

 Key: HBASE-14203
 URL: https://issues.apache.org/jira/browse/HBASE-14203
 Project: HBase
  Issue Type: Improvement
Reporter: Heng Chen
Priority: Trivial
 Attachments: HBASE-14203.patch, HBASE-14203_v2.patch


 As TODO in comment said, 
 {{HTable.getTableDescriptor}} is same as {{HAdmin.getTableDescriptor}}. 
 remove the duplicate code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-12325) Add Utility to remove snapshot from a directory

2015-08-11 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark resolved HBASE-12325.
---
Resolution: Won't Fix

The backup tool that's being created is a better way to do this.

 Add Utility to remove snapshot from a directory
 ---

 Key: HBASE-12325
 URL: https://issues.apache.org/jira/browse/HBASE-12325
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: DeleteRemoteSnapshotTool.java


 If there are several snapshots exported to a single directory, it's nice to 
 be able to remove the oldest one. Since snapshots in the same directory can 
 share files it's not as simple as just removing all files in a snapshot.
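The reference-checking step the description implies can be sketched as set arithmetic: a file is safe to delete only if no surviving snapshot still references it. The class and method names below are hypothetical, not the attached tool.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of deleting one snapshot from a directory where
// snapshots share files: keep any file another snapshot still references.
public class SnapshotCleanupSketch {
    // Returns the files that are safe to physically delete when removing
    // `toDelete`, i.e. files referenced by no surviving snapshot.
    static Set<String> filesSafeToDelete(Map<String, Set<String>> snapshots,
                                         String toDelete) {
        Set<String> candidates = new HashSet<>(snapshots.get(toDelete));
        for (Map.Entry<String, Set<String>> e : snapshots.entrySet()) {
            if (!e.getKey().equals(toDelete)) {
                candidates.removeAll(e.getValue()); // still referenced elsewhere
            }
        }
        return candidates;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> snaps = new HashMap<>();
        snaps.put("snap-old", new HashSet<>(Arrays.asList("hfile-a", "hfile-b")));
        snaps.put("snap-new", new HashSet<>(Arrays.asList("hfile-b", "hfile-c")));
        // hfile-b is shared, so only hfile-a becomes deletable.
        System.out.println(filesSafeToDelete(snaps, "snap-old"));
    }
}
```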



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14208) Remove yarn dependencies on -common and -client

2015-08-11 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-14208:
-

 Summary: Remove yarn dependencies on -common and -client
 Key: HBASE-14208
 URL: https://issues.apache.org/jira/browse/HBASE-14208
 Project: HBase
  Issue Type: Bug
  Components: build, Client
Affects Versions: 2.0.0, 1.3.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 1.3.0


They aren't really needed since MR can't be used without server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14206) MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges

2015-08-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14206:
---
Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

 MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges
 ---

 Key: HBASE-14206
 URL: https://issues.apache.org/jira/browse/HBASE-14206
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
 Environment: linux, java7
Reporter: Anton Nazaruk
Assignee: Anton Nazaruk
Priority: Critical
  Labels: filter
 Attachments: 14206-test.patch, 14206-v1.txt


 I haven't found a way to attach test program to JIRA issue, so put it below :
 {code}
 public class MultiRowRangeFilterTest {
  
 byte[] key1Start = new byte[] {-3};
 byte[] key1End  = new byte[] {-2};
 byte[] key2Start = new byte[] {5};
 byte[] key2End  = new byte[] {6};
 byte[] badKey = new byte[] {-10};
 @Test
 public void testRanges() throws IOException {
 MultiRowRangeFilter filter = new MultiRowRangeFilter(Arrays.asList(
 new MultiRowRangeFilter.RowRange(key1Start, true, key1End, 
 false),
 new MultiRowRangeFilter.RowRange(key2Start, true, key2End, 
 false)
 ));
 filter.filterRowKey(badKey, 0, 1);
 /*
 * FAILS -- includes BAD key!
 * Expected :SEEK_NEXT_USING_HINT
 * Actual   :INCLUDE
 * */
 assertEquals(Filter.ReturnCode.SEEK_NEXT_USING_HINT, 
 filter.filterKeyValue(null));
 }
 }
 {code}
 It seems to happen on 2.0.0-SNAPSHOT too, but I wasn't able to link one with 
 included class.
 I have played some time with algorithm, and found that quick fix may be 
 applied to getNextRangeIndex(byte[] rowKey) method (hbase-client:1.1.0) :
 {code}
 if (insertionPosition == 0 && !rangeList.get(insertionPosition).contains(rowKey)) {
 return ROW_BEFORE_FIRST_RANGE;
 }
 // FIX START
 if(!this.initialized) {
 this.initialized = true;
 }
 // FIX END
 return insertionPosition;
 {code} 
 Thanks, hope it will help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-14206) MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges

2015-08-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-14206:
--

Assignee: Anton Nazaruk  (was: Ted Yu)

 MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges
 ---

 Key: HBASE-14206
 URL: https://issues.apache.org/jira/browse/HBASE-14206
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
 Environment: linux, java7
Reporter: Anton Nazaruk
Assignee: Anton Nazaruk
Priority: Critical
  Labels: filter
 Attachments: 14206-test.patch


 I haven't found a way to attach test program to JIRA issue, so put it below :
 {code}
 public class MultiRowRangeFilterTest {
  
 byte[] key1Start = new byte[] {-3};
 byte[] key1End  = new byte[] {-2};
 byte[] key2Start = new byte[] {5};
 byte[] key2End  = new byte[] {6};
 byte[] badKey = new byte[] {-10};
 @Test
 public void testRanges() throws IOException {
 MultiRowRangeFilter filter = new MultiRowRangeFilter(Arrays.asList(
 new MultiRowRangeFilter.RowRange(key1Start, true, key1End, 
 false),
 new MultiRowRangeFilter.RowRange(key2Start, true, key2End, 
 false)
 ));
 filter.filterRowKey(badKey, 0, 1);
 /*
 * FAILS -- includes BAD key!
 * Expected :SEEK_NEXT_USING_HINT
 * Actual   :INCLUDE
 * */
 assertEquals(Filter.ReturnCode.SEEK_NEXT_USING_HINT, 
 filter.filterKeyValue(null));
 }
 }
 {code}
 It seems to happen on 2.0.0-SNAPSHOT too, but I wasn't able to link one with 
 included class.
 I have played some time with algorithm, and found that quick fix may be 
 applied to getNextRangeIndex(byte[] rowKey) method (hbase-client:1.1.0) :
 {code}
 if (insertionPosition == 0 && !rangeList.get(insertionPosition).contains(rowKey)) {
 return ROW_BEFORE_FIRST_RANGE;
 }
 // FIX START
 if(!this.initialized) {
 this.initialized = true;
 }
 // FIX END
 return insertionPosition;
 {code} 
 Thanks, hope it will help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14203) remove duplicate code getTableDescriptor in HTable

2015-08-11 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-14203:
--
Attachment: HBASE-14203_v2.patch

 remove duplicate code getTableDescriptor in HTable
 --

 Key: HBASE-14203
 URL: https://issues.apache.org/jira/browse/HBASE-14203
 Project: HBase
  Issue Type: Improvement
Reporter: Heng Chen
Priority: Trivial
 Attachments: HBASE-14203.patch, HBASE-14203_v2.patch


 As TODO in comment said, 
 {{HTable.getTableDescriptor}} is same as {{HAdmin.getTableDescriptor}}. 
 remove the duplicate code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14098) Allow dropping caches behind compactions

2015-08-11 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14098:
--
Attachment: HBASE-14098-v6.patch

The patch went stale while I was on vacation. Here's a rebase.

Yes, this does turn on private readers by default. We've been running it for a 
while and I haven't seen any downsides, so I feel pretty confident that it's 
not too big a risk.

 Allow dropping caches behind compactions
 

 Key: HBASE-14098
 URL: https://issues.apache.org/jira/browse/HBASE-14098
 Project: HBase
  Issue Type: Bug
  Components: Compaction, hadoop2, HFile
Affects Versions: 2.0.0, 1.3.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14098-v1.patch, HBASE-14098-v2.patch, 
 HBASE-14098-v3.patch, HBASE-14098-v4.patch, HBASE-14098-v5.patch, 
 HBASE-14098-v6.patch, HBASE-14098.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14098) Allow dropping caches behind compactions

2015-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681925#comment-14681925
 ] 

Hadoop QA commented on HBASE-14098:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12749862/HBASE-14098-v6.patch
  against master branch at commit 7d4de20cafd6b765bd5f33df72fc0e630d1731f7.
  ATTACHMENT ID: 12749862

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 20 new 
or modified tests.

{color:red}-1 javac{color}.  The patch appears to cause mvn compile goal to 
fail with Hadoop version 2.4.0.

Compilation errors resume:
[ERROR] COMPILATION ERROR : 
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobFile.java:[60,57]
 no suitable method found for 
getScannersForStoreFiles(java.util.List<org.apache.hadoop.hbase.regionserver.StoreFile>,boolean,boolean,boolean,<nulltype>,long)
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobFile.java:[91,59]
 no suitable method found for 
getScannersForStoreFiles(java.util.List<org.apache.hadoop.hbase.regionserver.StoreFile>,boolean,boolean,boolean,<nulltype>,long)
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/mob/DefaultMobStoreCompactor.java:[80,3]
 method does not override or implement a method from a supertype
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/mob/compactions/PartitionedMobCompactor.java:[549,37]
 no suitable method found for 
getScannersForStoreFiles(java.util.List<org.apache.hadoop.hbase.regionserver.StoreFile>,boolean,boolean,boolean,<nulltype>,long)
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on 
project hbase-server: Compilation failure: Compilation failure:
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobFile.java:[60,57]
 no suitable method found for 
getScannersForStoreFiles(java.util.List<org.apache.hadoop.hbase.regionserver.StoreFile>,boolean,boolean,boolean,<nulltype>,long)
[ERROR] method 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.getScannersForStoreFiles(java.util.Collection<org.apache.hadoop.hbase.regionserver.StoreFile>,boolean,boolean,boolean,boolean,org.apache.hadoop.hbase.regionserver.ScanQueryMatcher,long)
 is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.getScannersForStoreFiles(java.util.Collection<org.apache.hadoop.hbase.regionserver.StoreFile>,boolean,boolean,boolean,boolean,long)
 is not applicable
[ERROR] (actual argument <nulltype> cannot be converted to boolean by method 
invocation conversion)
[ERROR] method 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.getScannersForStoreFiles(java.util.Collection<org.apache.hadoop.hbase.regionserver.StoreFile>,boolean,boolean,long)
 is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobFile.java:[91,59]
 no suitable method found for 
getScannersForStoreFiles(java.util.List<org.apache.hadoop.hbase.regionserver.StoreFile>,boolean,boolean,boolean,<nulltype>,long)
[ERROR] method 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.getScannersForStoreFiles(java.util.Collection<org.apache.hadoop.hbase.regionserver.StoreFile>,boolean,boolean,boolean,boolean,org.apache.hadoop.hbase.regionserver.ScanQueryMatcher,long)
 is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.getScannersForStoreFiles(java.util.Collection<org.apache.hadoop.hbase.regionserver.StoreFile>,boolean,boolean,boolean,boolean,long)
 is not applicable
[ERROR] (actual argument <nulltype> cannot be converted to boolean by method 
invocation conversion)
[ERROR] method 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.getScannersForStoreFiles(java.util.Collection<org.apache.hadoop.hbase.regionserver.StoreFile>,boolean,boolean,long)
 is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/mob/DefaultMobStoreCompactor.java:[80,3]
 method does not override or implement a method from a supertype
[ERROR] 

[jira] [Commented] (HBASE-14203) remove duplicate code getTableDescriptor in HTable

2015-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681840#comment-14681840
 ] 

Hadoop QA commented on HBASE-14203:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12749795/HBASE-14203.patch
  against master branch at commit 3d5801602da7cde1f20bdd4b898e8b3cac77f2a3.
  ATTACHMENT ID: 12749795

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1861 checkstyle errors (more than the master's current 1858 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15041//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15041//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15041//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15041//console

This message is automatically generated.

 remove duplicate code getTableDescriptor in HTable
 --

 Key: HBASE-14203
 URL: https://issues.apache.org/jira/browse/HBASE-14203
 Project: HBase
  Issue Type: Improvement
Reporter: Heng Chen
Priority: Trivial
 Attachments: HBASE-14203.patch


 As TODO in comment said, 
 {{HTable.getTableDescriptor}} is same as {{HAdmin.getTableDescriptor}}. 
 remove the duplicate code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14206) MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges

2015-08-11 Thread Anton Nazaruk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681846#comment-14681846
 ] 

Anton Nazaruk commented on HBASE-14206:
---

yeah, I've used the same approach as [~jiajia] in order to make it compatible 
with the 2.0.0-SNAPSHOT API; sorry, I didn't mention that in my previous 
comment.

 MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges
 ---

 Key: HBASE-14206
 URL: https://issues.apache.org/jira/browse/HBASE-14206
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
 Environment: linux, java7
Reporter: Anton Nazaruk
Assignee: Anton Nazaruk
Priority: Critical
  Labels: filter
 Attachments: 14206-test.patch, 14206-v1.txt


 I haven't found a way to attach test program to JIRA issue, so put it below :
 {code}
 public class MultiRowRangeFilterTest {
  
 byte[] key1Start = new byte[] {-3};
 byte[] key1End  = new byte[] {-2};
 byte[] key2Start = new byte[] {5};
 byte[] key2End  = new byte[] {6};
 byte[] badKey = new byte[] {-10};
 @Test
 public void testRanges() throws IOException {
 MultiRowRangeFilter filter = new MultiRowRangeFilter(Arrays.asList(
 new MultiRowRangeFilter.RowRange(key1Start, true, key1End, 
 false),
 new MultiRowRangeFilter.RowRange(key2Start, true, key2End, 
 false)
 ));
 filter.filterRowKey(badKey, 0, 1);
 /*
 * FAILS -- includes BAD key!
 * Expected :SEEK_NEXT_USING_HINT
 * Actual   :INCLUDE
 * */
 assertEquals(Filter.ReturnCode.SEEK_NEXT_USING_HINT, 
 filter.filterKeyValue(null));
 }
 }
 {code}
 It seems to happen on 2.0.0-SNAPSHOT too, but I wasn't able to link one with 
 included class.
 I have played some time with algorithm, and found that quick fix may be 
 applied to getNextRangeIndex(byte[] rowKey) method (hbase-client:1.1.0) :
 {code}
 if (insertionPosition == 0 && !rangeList.get(insertionPosition).contains(rowKey)) {
 return ROW_BEFORE_FIRST_RANGE;
 }
 // FIX START
 if(!this.initialized) {
 this.initialized = true;
 }
 // FIX END
 return insertionPosition;
 {code} 
 Thanks, hope it will help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14207) Region was hijacked and remained in transition when RS failed to open a region and later regionplan changed to new RS on retry

2015-08-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681857#comment-14681857
 ] 

Ted Yu commented on HBASE-14207:


Are you going to attach a patch ?

 Region was hijacked and remained in transition when RS failed to open a 
 region and later regionplan changed to new RS on retry
 --

 Key: HBASE-14207
 URL: https://issues.apache.org/jira/browse/HBASE-14207
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.98.6
Reporter: Pankaj Kumar
Assignee: Pankaj Kumar
Priority: Critical

 In a production environment, the following events happened:
 1. Master tried to assign a region to an RS, but due to 
 KeeperException$SessionExpiredException the RS failed to open the region.
   In RS log, saw multiple WARN log related to 
 KeeperException$SessionExpiredException 
KeeperErrorCode = Session expired for 
 /hbase/region-in-transition/08f1935d652e5dbdac09b423b8f9401b
Unable to get data of znode 
 /hbase/region-in-transition/08f1935d652e5dbdac09b423b8f9401b
 2. Master retried assigning the region to the same RS, but the RS failed again.
 3. On the second retry a new plan was formed with a different destination RS, 
 so the master sent the request to the new RS to open the region. But the new 
 RS failed to open the region because the server name in the ZNODE did not 
 match the expected current server name. 
 Logs Snippet:
 {noformat}
 HM
 2015-07-14 03:50:29,759 | INFO  | master:T101PC03VM13:21300 | Processing 
 08f1935d652e5dbdac09b423b8f9401b in state: M_ZK_REGION_OFFLINE | 
 org.apache.hadoop.hbase.master.AssignmentManager.processRegionsInTransition(AssignmentManager.java:644)
 2015-07-14 03:50:29,759 | INFO  | master:T101PC03VM13:21300 | Transitioned 
 {08f1935d652e5dbdac09b423b8f9401b state=OFFLINE, ts=1436817029679, 
 server=null} to {08f1935d652e5dbdac09b423b8f9401b state=PENDING_OPEN, 
 ts=1436817029759, server=T101PC03VM13,21302,1436816690692} | 
 org.apache.hadoop.hbase.master.RegionStates.updateRegionState(RegionStates.java:327)
 2015-07-14 03:50:29,760 | INFO  | master:T101PC03VM13:21300 | Processed 
 region 08f1935d652e5dbdac09b423b8f9401b in state M_ZK_REGION_OFFLINE, on 
 server: T101PC03VM13,21302,1436816690692 | 
 org.apache.hadoop.hbase.master.AssignmentManager.processRegionsInTransition(AssignmentManager.java:768)
 2015-07-14 03:50:29,800 | INFO  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Assigning 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 T101PC03VM13,21302,1436816690692 | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1983)
 2015-07-14 03:50:29,801 | WARN  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Failed assignment of 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 T101PC03VM13,21302,1436816690692, trying to assign elsewhere instead; try=1 
 of 10 | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2077)
 2015-07-14 03:50:29,802 | INFO  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Trying to re-assign 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 the same failed server. | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2123)
 2015-07-14 03:50:31,804 | INFO  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Assigning 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 T101PC03VM13,21302,1436816690692 | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1983)
 2015-07-14 03:50:31,806 | WARN  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Failed assignment of 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 T101PC03VM13,21302,1436816690692, trying to assign elsewhere instead; try=2 
 of 10 | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2077)
 2015-07-14 03:50:31,807 | INFO  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Transitioned 
 {08f1935d652e5dbdac09b423b8f9401b state=PENDING_OPEN, ts=1436817031804, 
 server=T101PC03VM13,21302,1436816690692} to {08f1935d652e5dbdac09b423b8f9401b 
 state=OFFLINE, ts=1436817031807, server=T101PC03VM13,21302,1436816690692} | 
 org.apache.hadoop.hbase.master.RegionStates.updateRegionState(RegionStates.java:327)
 2015-07-14 03:50:31,807 | INFO  | 
 MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Assigning 
 INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to 
 T101PC03VM14,21302,1436816997967 | 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1983)
 2015-07-14 

[jira] [Commented] (HBASE-13062) Add documentation coverage for configuring dns server with thrift and rest gateways

2015-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681833#comment-14681833
 ] 

Hudson commented on HBASE-13062:


FAILURE: Integrated in HBase-TRUNK #6713 (See 
[https://builds.apache.org/job/HBase-TRUNK/6713/])
HBASE-13062 Add documentation coverage for configuring dns server with thrift 
and rest gateways (mstanleyjones: rev 7d4de20cafd6b765bd5f33df72fc0e630d1731f7)
* src/main/asciidoc/_chapters/security.adoc


 Add documentation coverage for configuring dns server with thrift and rest 
 gateways
 ---

 Key: HBASE-13062
 URL: https://issues.apache.org/jira/browse/HBASE-13062
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Srikanth Srungarapu
Assignee: Misty Stanley-Jones
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-13062-v1.patch, HBASE-13062.patch


 Currently, the documentation doesn't cover configuring DNS with the thrift 
 or rest gateways, though the code base does provide for it. The 
 following parameters are used to accomplish this.
 For REST:
 * hbase.rest.dns.interface
 * hbase.rest.dns.nameserver
 For Thrift:
 * hbase.thrift.dns.interface
 * hbase.thrift.dns.nameserver
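For reference, wiring these up in hbase-site.xml might look like the fragment below; only the property names come from the issue above, while the interface and nameserver values are illustrative placeholders.

```xml
<!-- REST gateway: the network interface and the DNS server used to
     resolve the gateway's own hostname (values are examples). -->
<property>
  <name>hbase.rest.dns.interface</name>
  <value>eth0</value>
</property>
<property>
  <name>hbase.rest.dns.nameserver</name>
  <value>10.0.0.2</value>
</property>
<!-- Thrift gateway equivalents -->
<property>
  <name>hbase.thrift.dns.interface</name>
  <value>eth0</value>
</property>
<property>
  <name>hbase.thrift.dns.nameserver</name>
  <value>10.0.0.2</value>
</property>
```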



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14166) Per-Region metrics can be stale

2015-08-11 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14166:
--
Attachment: HBASE-14166-v3.patch

Checkstyle fixes.

 Per-Region metrics can be stale
 ---

 Key: HBASE-14166
 URL: https://issues.apache.org/jira/browse/HBASE-14166
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0.1
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14166-v1.patch, HBASE-14166-v2.patch, 
 HBASE-14166-v3.patch, HBASE-14166.patch


 We're seeing some machines that are reporting only old region metrics. It 
 seems like at some point the Hadoop metrics system decided which metrics to 
 display and which not to. From then on it was not changing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13889) Fix hbase-shaded-client artifact so it works on hbase-downstreamer

2015-08-11 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-13889:
--
Status: Patch Available  (was: Open)

 Fix hbase-shaded-client artifact so it works on hbase-downstreamer
 --

 Key: HBASE-13889
 URL: https://issues.apache.org/jira/browse/HBASE-13889
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 1.1.0.1, 1.1.0
 Environment: N/A?
Reporter: Dmitry Minkovsky
Assignee: Elliott Clark
Priority: Critical
 Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.3

 Attachments: 13889.wip.patch, HBASE-13889.patch, Screen Shot 
 2015-06-11 at 10.59.55 AM.png


 The {{hbase-shaded-client}} artifact was introduced in 
 [HBASE-13517|https://issues.apache.org/jira/browse/HBASE-13517]. Thank you 
 very much for this, as I am new to Java building and was having a very 
 slow-moving time resolving conflicts. However, the shaded client artifact 
 seems to be missing {{javax.xml.transform.TransformerException}}.  I examined 
 the JAR, which does not have this package/class.
 Steps to reproduce:
 Java: 
 {code}
 package com.mycompany.app;

 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.client.Connection;
 import org.apache.hadoop.hbase.client.ConnectionFactory;

 public class App {
     public static void main(String[] args) throws java.io.IOException {
         Configuration config = HBaseConfiguration.create();
         Connection connection = ConnectionFactory.createConnection(config);
     }
 }
 {code}
 POM:
 {code}
 <project xmlns="http://maven.apache.org/POM/4.0.0"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

   <modelVersion>4.0.0</modelVersion>

   <groupId>com.mycompany.app</groupId>
   <artifactId>my-app</artifactId>
   <version>1.0-SNAPSHOT</version>
   <packaging>jar</packaging>

[jira] [Updated] (HBASE-14206) MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges

2015-08-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14206:
---
Attachment: 14206-v1.txt

 MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges
 ---

 Key: HBASE-14206
 URL: https://issues.apache.org/jira/browse/HBASE-14206
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
 Environment: linux, java7
Reporter: Anton Nazaruk
Assignee: Anton Nazaruk
Priority: Critical
  Labels: filter
 Attachments: 14206-test.patch, 14206-v1.txt


 I haven't found a way to attach the test program to the JIRA issue, so I've put it below:
 {code}
 public class MultiRowRangeFilterTest {
 
     byte[] key1Start = new byte[] {-3};
     byte[] key1End = new byte[] {-2};
     byte[] key2Start = new byte[] {5};
     byte[] key2End = new byte[] {6};
     byte[] badKey = new byte[] {-10};
 
     @Test
     public void testRanges() throws IOException {
         MultiRowRangeFilter filter = new MultiRowRangeFilter(Arrays.asList(
             new MultiRowRangeFilter.RowRange(key1Start, true, key1End, false),
             new MultiRowRangeFilter.RowRange(key2Start, true, key2End, false)));
         filter.filterRowKey(badKey, 0, 1);
         /*
          * FAILS -- includes BAD key!
          * Expected :SEEK_NEXT_USING_HINT
          * Actual   :INCLUDE
          */
         assertEquals(Filter.ReturnCode.SEEK_NEXT_USING_HINT, filter.filterKeyValue(null));
     }
 }
 {code}
 It seems to happen on 2.0.0-SNAPSHOT too, but I wasn't able to link that version with the included class.
 I have played with the algorithm for some time and found that a quick fix may be applied to the getNextRangeIndex(byte[] rowKey) method (hbase-client:1.1.0):
 {code}
 if (insertionPosition == 0 && !rangeList.get(insertionPosition).contains(rowKey)) {
   return ROW_BEFORE_FIRST_RANGE;
 }
 // FIX START
 if (!this.initialized) {
   this.initialized = true;
 }
 // FIX END
 return insertionPosition;
 {code} 
 Thanks, hope it will help.
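For readers without the HBase source at hand, the range-index lookup that this fix touches can be sketched as a small, self-contained binary search over sorted ranges. The names below (RangeLookupSketch, nextRangeIndex, Range) are illustrative stand-ins, not the actual MultiRowRangeFilter internals, and integer keys stand in for byte[] row keys:

```java
import java.util.List;

// Illustrative, HBase-free sketch of a sorted-range lookup in the spirit of
// MultiRowRangeFilter.getNextRangeIndex. Integer keys stand in for byte[]
// row keys; names are hypothetical, not the real HBase internals.
public class RangeLookupSketch {
    static final int ROW_BEFORE_FIRST_RANGE = -1;

    static class Range {
        final int start; // inclusive
        final int stop;  // exclusive
        Range(int start, int stop) { this.start = start; this.stop = stop; }
        boolean contains(int key) { return key >= start && key < stop; }
    }

    // Returns the index of the range containing key, the index of the next
    // range (the "seek hint") when key falls in a gap between ranges, or
    // ROW_BEFORE_FIRST_RANGE when key sorts before every range.
    static int nextRangeIndex(List<Range> ranges, int key) {
        int lo = 0, hi = ranges.size() - 1;
        while (lo <= hi) { // binary search over range start keys
            int mid = (lo + hi) >>> 1;
            if (ranges.get(mid).start <= key) lo = mid + 1; else hi = mid - 1;
        }
        int insertionPosition = lo;
        if (insertionPosition == 0) {
            return ROW_BEFORE_FIRST_RANGE; // key sorts before the first range
        }
        Range candidate = ranges.get(insertionPosition - 1);
        // Either inside the previous range, or seek forward to the next one
        // (an index == ranges.size() means "past the last range").
        return candidate.contains(key) ? insertionPosition - 1 : insertionPosition;
    }
}
```

With ranges [10, 20) and [50, 60), a key of 5 yields ROW_BEFORE_FIRST_RANGE, 15 yields index 0, and 30 yields the seek hint 1.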





[jira] [Commented] (HBASE-14206) MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges

2015-08-11 Thread Anton Nazaruk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681882#comment-14681882
 ] 

Anton Nazaruk commented on HBASE-14206:
---

[~tedyu], what do I have to do with this issue (you've assigned it to me)? I am 
not an HBase committer. If there is a reference for your JIRA process and issue 
states, please share it with me.

 MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges
 ---

 Key: HBASE-14206
 URL: https://issues.apache.org/jira/browse/HBASE-14206
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
 Environment: linux, java7
Reporter: Anton Nazaruk
Assignee: Anton Nazaruk
Priority: Critical
  Labels: filter
 Attachments: 14206-test.patch, 14206-v1.txt


 I haven't found a way to attach the test program to the JIRA issue, so I've put it below:
 {code}
 public class MultiRowRangeFilterTest {
 
     byte[] key1Start = new byte[] {-3};
     byte[] key1End = new byte[] {-2};
     byte[] key2Start = new byte[] {5};
     byte[] key2End = new byte[] {6};
     byte[] badKey = new byte[] {-10};
 
     @Test
     public void testRanges() throws IOException {
         MultiRowRangeFilter filter = new MultiRowRangeFilter(Arrays.asList(
             new MultiRowRangeFilter.RowRange(key1Start, true, key1End, false),
             new MultiRowRangeFilter.RowRange(key2Start, true, key2End, false)));
         filter.filterRowKey(badKey, 0, 1);
         /*
          * FAILS -- includes BAD key!
          * Expected :SEEK_NEXT_USING_HINT
          * Actual   :INCLUDE
          */
         assertEquals(Filter.ReturnCode.SEEK_NEXT_USING_HINT, filter.filterKeyValue(null));
     }
 }
 {code}
 It seems to happen on 2.0.0-SNAPSHOT too, but I wasn't able to link that version with the included class.
 I have played with the algorithm for some time and found that a quick fix may be applied to the getNextRangeIndex(byte[] rowKey) method (hbase-client:1.1.0):
 {code}
 if (insertionPosition == 0 && !rangeList.get(insertionPosition).contains(rowKey)) {
   return ROW_BEFORE_FIRST_RANGE;
 }
 // FIX START
 if (!this.initialized) {
   this.initialized = true;
 }
 // FIX END
 return insertionPosition;
 {code} 
 Thanks, hope it will help.





[jira] [Commented] (HBASE-14202) Reduce garbage we create

2015-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681946#comment-14681946
 ] 

Hadoop QA commented on HBASE-14202:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12749758/HBASE-14202.patch
  against master branch at commit 7d4de20cafd6b765bd5f33df72fc0e630d1731f7.
  ATTACHMENT ID: 12749758

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15042//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15042//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15042//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15042//console

This message is automatically generated.

 Reduce garbage we create
 

 Key: HBASE-14202
 URL: https://issues.apache.org/jira/browse/HBASE-14202
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 2.0.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-14202.patch


 Two optimizations w.r.t. the number of short-lived objects we create:
 1. The IOEngine#read call that reads from the L2 cache always creates a Pair 
 object to return the BB and MemoryType. We can avoid this by making the read 
 API return a Cacheable, passing the CacheableDeserializer to be used to the 
 read API as well. A setter for MemoryType is already there in the Cacheable 
 interface.
 2. ByteBuff#asSubByteBuffer(int, int, Pair) avoids Pair object creation on 
 every call because we pass in a shared Pair object. Still, since Pair can hold 
 only Objects, the primitive int has to be boxed into an Integer object every 
 time. This can be avoided by creating a new Pair type which pairs an Object 
 with a primitive int.
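The second optimization can be sketched as a tiny pair type specialized for a primitive int. The sketch below illustrates the idea and may differ in detail from the class HBase actually added:

```java
// Minimal sketch of a pair of an Object reference and a primitive int.
// Reusing one shared instance across calls avoids both the per-call Pair
// allocation and the Integer boxing that a generic Pair<T, Integer> would
// incur. Illustrative only; HBase's actual class may differ in detail.
public class ObjectIntPair<T> {
    private T first;
    private int second;

    public T getFirst() { return first; }
    public int getSecond() { return second; }

    // Refill both halves in one call, mirroring how a shared pair would be
    // rewritten on every asSubByteBuffer-style invocation.
    public void setFirstAndSecond(T first, int second) {
        this.first = first;
        this.second = second;
    }
}
```

A hot method can then accept one long-lived ObjectIntPair and overwrite it on each call, so steady-state invocations allocate nothing.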





[jira] [Commented] (HBASE-14150) Add BulkLoad functionality to HBase-Spark Module

2015-08-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682016#comment-14682016
 ] 

Ted Yu commented on HBASE-14150:


Have you seen Andrew's comment on the review board?

 Add BulkLoad functionality to HBase-Spark Module
 

 Key: HBASE-14150
 URL: https://issues.apache.org/jira/browse/HBASE-14150
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-14150.1.patch, HBASE-14150.2.patch, 
 HBASE-14150.3.patch, HBASE-14150.4.patch


 Add on to the work done in HBASE-13992 to add functionality to do a bulk load 
 from a given RDD.
 This will do the following:
 1. Figure out the number of regions, and sort and partition the data correctly 
 to be written out as HFiles.
 2. Unlike the MR bulk load, I would like the columns to be sorted in the 
 shuffle stage and not in the memory of the reducer. This will allow this 
 design to support super-wide records without running out of memory.
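The two steps above can be sketched in plain Java (no Spark) to show the shape of the computation: route each row to the region whose start key precedes it, then sort within each partition rather than globally. The class and method names are hypothetical, and the real module would use Spark's shuffle (e.g. repartitionAndSortWithinPartitions) instead of this toy pipeline:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Plain-Java sketch of the bulk-load shape: partition rows by region start
// key, then sort within each partition, so no single reducer must hold a
// whole sorted dataset in memory at once. Hypothetical names; the real
// module rides on Spark's shuffle instead of this toy pipeline.
public class BulkLoadPartitionSketch {

    // Index of the region whose start key is the greatest key <= row.
    // Assumes sorted start keys whose first entry is the empty key "",
    // as with HBase's first region.
    static int partitionFor(String[] regionStartKeys, String row) {
        int idx = Arrays.binarySearch(regionStartKeys, row);
        return idx >= 0 ? idx : -(idx + 1) - 1; // insertion point - 1
    }

    static Map<Integer, List<String>> partitionAndSort(String[] regionStartKeys,
                                                       List<String> rows) {
        Map<Integer, List<String>> parts = new TreeMap<>();
        for (String row : rows) {
            parts.computeIfAbsent(partitionFor(regionStartKeys, row),
                                  k -> new ArrayList<>()).add(row);
        }
        for (List<String> part : parts.values()) {
            Collections.sort(part); // sort per partition, never globally
        }
        return parts;
    }
}
```

Each sorted partition then maps onto one region's HFile, which is what makes the per-partition (rather than global or in-reducer) sort sufficient.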





[jira] [Commented] (HBASE-14150) Add BulkLoad functionality to HBase-Spark Module

2015-08-11 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682059#comment-14682059
 ] 

Ted Malaska commented on HBASE-14150:
-

Cool, just reviewed.  I will try to get another patch in the next couple of 
days.

 Add BulkLoad functionality to HBase-Spark Module
 

 Key: HBASE-14150
 URL: https://issues.apache.org/jira/browse/HBASE-14150
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-14150.1.patch, HBASE-14150.2.patch, 
 HBASE-14150.3.patch, HBASE-14150.4.patch


 Add on to the work done in HBASE-13992 to add functionality to do a bulk load 
 from a given RDD.
 This will do the following:
 1. Figure out the number of regions, and sort and partition the data correctly 
 to be written out as HFiles.
 2. Unlike the MR bulk load, I would like the columns to be sorted in the 
 shuffle stage and not in the memory of the reducer. This will allow this 
 design to support super-wide records without running out of memory.





[jira] [Updated] (HBASE-14181) Add Spark DataFrame DataSource to HBase-Spark Module

2015-08-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14181:
---
Status: Patch Available  (was: Open)

 Add Spark DataFrame DataSource to HBase-Spark Module
 

 Key: HBASE-14181
 URL: https://issues.apache.org/jira/browse/HBASE-14181
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
Priority: Minor
 Attachments: HBASE-14181.1.patch, HBASE-14181.2.patch, 
 HBASE-14181.3.patch


 Build a RelationProvider for HBase-Spark Module.





[jira] [Commented] (HBASE-13376) Improvements to Stochastic load balancer

2015-08-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682038#comment-14682038
 ] 

Ted Yu commented on HBASE-13376:


But in test output, looks like TestStochasticLoadBalancer2 didn't complete.

 Improvements to Stochastic load balancer
 

 Key: HBASE-13376
 URL: https://issues.apache.org/jira/browse/HBASE-13376
 Project: HBase
  Issue Type: Improvement
  Components: Balancer
Affects Versions: 1.0.0, 0.98.12
Reporter: Vandana Ayyalasomayajula
Assignee: Vandana Ayyalasomayajula
Priority: Minor
 Attachments: 13376-v2.txt, HBASE-13376.patch, HBASE-13376_0.98.txt, 
 HBASE-13376_0.txt, HBASE-13376_1.txt, HBASE-13376_1_1.txt, 
 HBASE-13376_2.patch, HBASE-13376_2_branch-1.patch, HBASE-13376_3.patch, 
 HBASE-13376_98.patch, HBASE-13376_branch-1.patch


 There are two things this jira tries to address:
 1. The locality picker in the stochastic balancer does not pick regions with 
 the least locality as candidates for a swap/move. So when a user configures 
 locality cost in the configs, the balancer does not always seem to move 
 regions with bad locality. 
 2. When a cluster has servers with an equal number of regions, it always picks 
 the first one. It should pick a random region on one of the equally loaded 
 servers. This improves the chance of finding a good candidate when the load 
 picker is invoked several times. 
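The random tie-break in point 2 can be sketched as a single pass that keeps each tied maximum with probability 1/ties (reservoir-style). The names below are illustrative, not the actual StochasticLoadBalancer code:

```java
import java.util.List;
import java.util.Random;

// Sketch of the proposed tie-break: instead of always taking the first
// most-loaded server, choose uniformly at random among all servers that
// share the maximum load, in one pass. Names are illustrative, not the
// actual StochasticLoadBalancer code.
public class TiePickSketch {

    // Returns the index of a uniformly random server among those with
    // maximum load (reservoir-style tie-breaking).
    static int pickMostLoaded(List<Integer> loads, Random rng) {
        int max = Integer.MIN_VALUE;
        int chosen = -1;
        int ties = 0;
        for (int i = 0; i < loads.size(); i++) {
            int load = loads.get(i);
            if (load > max) {          // new maximum: restart the tie count
                max = load;
                ties = 1;
                chosen = i;
            } else if (load == max) {  // tie: keep i with probability 1/ties
                ties++;
                if (rng.nextInt(ties) == 0) chosen = i;
            }
        }
        return chosen;
    }
}
```

Repeated invocations now spread candidate picks across all equally loaded servers instead of hammering the first one.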





[jira] [Commented] (HBASE-14150) Add BulkLoad functionality to HBase-Spark Module

2015-08-11 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682058#comment-14682058
 ] 

Ted Malaska commented on HBASE-14150:
-

Sorry missed those, looking now.  Thanks

 Add BulkLoad functionality to HBase-Spark Module
 

 Key: HBASE-14150
 URL: https://issues.apache.org/jira/browse/HBASE-14150
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-14150.1.patch, HBASE-14150.2.patch, 
 HBASE-14150.3.patch, HBASE-14150.4.patch


 Add on to the work done in HBASE-13992 to add functionality to do a bulk load 
 from a given RDD.
 This will do the following:
 1. Figure out the number of regions, and sort and partition the data correctly 
 to be written out as HFiles.
 2. Unlike the MR bulk load, I would like the columns to be sorted in the 
 shuffle stage and not in the memory of the reducer. This will allow this 
 design to support super-wide records without running out of memory.





[jira] [Commented] (HBASE-5878) Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.

2015-08-11 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681980#comment-14681980
 ] 

Nick Dimiduk commented on HBASE-5878:
-

Thanks [~apurtell] and [~ashish singhi]!

 Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.
 ---

 Key: HBASE-5878
 URL: https://issues.apache.org/jira/browse/HBASE-5878
 Project: HBase
  Issue Type: Bug
  Components: wal
Reporter: Uma Maheswara Rao G
Assignee: Ashish Singhi
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-5878-branch-1.0.patch, HBASE-5878-v2.patch, 
 HBASE-5878-v3.patch, HBASE-5878-v4.patch, HBASE-5878-v5-0.98.patch, 
 HBASE-5878-v5.patch, HBASE-5878-v5.patch, HBASE-5878-v6-0.98.patch, 
 HBASE-5878-v6.patch, HBASE-5878-v7-0.98.patch, HBASE-5878.patch


 SequenceFileLogReader: 
 Currently HBase uses the getFileLength API from the DFSInputStream class via 
 reflection. DFSInputStream is not exposed as public, so this may change in 
 the future. HDFS now exposes HdfsDataInputStream as a public API.
 We can make use of it, as an else condition, when we are not able to find the 
 getFileLength API on DFSInputStream, so that we will not have any sudden 
 surprise like the one we are facing today.
 Also, the current code just logs one warn message and proceeds if an exception 
 is thrown while getting the length. I think we can re-throw the exception, 
 because there is no point in continuing with data loss.
 {code}
 long adjust = 0;
 try {
   Field fIn = FilterInputStream.class.getDeclaredField("in");
   fIn.setAccessible(true);
   Object realIn = fIn.get(this.in);
   // In hadoop 0.22, DFSInputStream is a standalone class.  Before this,
   // it was an inner class of DFSClient.
   if (realIn.getClass().getName().endsWith("DFSInputStream")) {
     Method getFileLength = realIn.getClass().
       getDeclaredMethod("getFileLength", new Class<?>[] {});
     getFileLength.setAccessible(true);
     long realLength = ((Long)getFileLength.
       invoke(realIn, new Object[] {})).longValue();
     assert(realLength >= this.length);
     adjust = realLength - this.length;
   } else {
     LOG.info("Input stream class: " + realIn.getClass().getName() +
       ", not adjusting length");
   }
 } catch (Exception e) {
   SequenceFileLogReader.LOG.warn(
     "Error while trying to get accurate file length.  " +
     "Truncation / data loss may occur if RegionServers die.", e);
 }
 return adjust + super.getPos();
 {code}
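The try-reflection-then-fall-back pattern the description argues for can be sketched in isolation. ToyStream below is a hypothetical stand-in for DFSInputStream/HdfsDataInputStream, not the real HDFS classes:

```java
import java.lang.reflect.Method;

// Sketch of "prefer the reflective non-public accessor when present, and
// fall back to a stable public API instead of silently proceeding".
// ToyStream is a hypothetical stand-in for DFSInputStream /
// HdfsDataInputStream; names are illustrative only.
public class VisibleLengthSketch {

    static class ToyStream {
        private long getFileLength() { return 42L; }   // non-public accessor
        public long getVisibleLength() { return 42L; } // stable public API
    }

    static long lengthOf(ToyStream in) {
        try {
            Method m = in.getClass().getDeclaredMethod("getFileLength");
            m.setAccessible(true);
            return (Long) m.invoke(in);
        } catch (ReflectiveOperationException e) {
            // The "else condition" the issue asks for: use the public API
            // rather than logging a warning and risking silent data loss.
            return in.getVisibleLength();
        }
    }
}
```

If the private method disappears in a future release, the caller degrades to the supported public API instead of mis-reporting the length.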





[jira] [Updated] (HBASE-14208) Remove yarn dependencies on -common and -client

2015-08-11 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14208:
--
Fix Version/s: (was: 1.3.0)
   2.0.0

 Remove yarn dependencies on -common and -client
 ---

 Key: HBASE-14208
 URL: https://issues.apache.org/jira/browse/HBASE-14208
 Project: HBase
  Issue Type: Bug
  Components: build, Client
Affects Versions: 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0

 Attachments: HBASE-14208.patch


 They aren't really needed since MR can't be used without server.





[jira] [Updated] (HBASE-14208) Remove yarn dependencies on -common and -client

2015-08-11 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14208:
--
Affects Version/s: (was: 1.3.0)

 Remove yarn dependencies on -common and -client
 ---

 Key: HBASE-14208
 URL: https://issues.apache.org/jira/browse/HBASE-14208
 Project: HBase
  Issue Type: Bug
  Components: build, Client
Affects Versions: 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0

 Attachments: HBASE-14208.patch


 They aren't really needed since MR can't be used without server.





[jira] [Updated] (HBASE-14208) Remove yarn dependencies on -common and -client

2015-08-11 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14208:
--
Status: Patch Available  (was: Open)

 Remove yarn dependencies on -common and -client
 ---

 Key: HBASE-14208
 URL: https://issues.apache.org/jira/browse/HBASE-14208
 Project: HBase
  Issue Type: Bug
  Components: build, Client
Affects Versions: 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0

 Attachments: HBASE-14208.patch


 They aren't really needed since MR can't be used without server.





[jira] [Commented] (HBASE-14208) Remove yarn dependencies on -common and -client

2015-08-11 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682010#comment-14682010
 ] 

Elliott Clark commented on HBASE-14208:
---

Removing some deprecated methods was necessary. They were deprecated at the end 
of 0.98 / the beginning of 1.0.0, so they can't be removed in branch-1. This 
will be master only :-/

 Remove yarn dependencies on -common and -client
 ---

 Key: HBASE-14208
 URL: https://issues.apache.org/jira/browse/HBASE-14208
 Project: HBase
  Issue Type: Bug
  Components: build, Client
Affects Versions: 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0

 Attachments: HBASE-14208.patch


 They aren't really needed since MR can't be used without server.





[jira] [Commented] (HBASE-14181) Add Spark DataFrame DataSource to HBase-Spark Module

2015-08-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682012#comment-14682012
 ] 

Ted Yu commented on HBASE-14181:


Understood.

Just wanted to see if tests pass.

 Add Spark DataFrame DataSource to HBase-Spark Module
 

 Key: HBASE-14181
 URL: https://issues.apache.org/jira/browse/HBASE-14181
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
Priority: Minor
 Attachments: HBASE-14181.1.patch, HBASE-14181.2.patch, 
 HBASE-14181.3.patch


 Build a RelationProvider for HBase-Spark Module.





[jira] [Updated] (HBASE-14202) Reduce garbage we create

2015-08-11 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-14202:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

 Reduce garbage we create
 

 Key: HBASE-14202
 URL: https://issues.apache.org/jira/browse/HBASE-14202
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 2.0.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-14202.patch


 Two optimizations w.r.t. the number of short-lived objects we create:
 1. The IOEngine#read call that reads from the L2 cache always creates a Pair 
 object to return the BB and MemoryType. We can avoid this by making the read 
 API return a Cacheable, passing the CacheableDeserializer to be used to the 
 read API as well. A setter for MemoryType is already there in the Cacheable 
 interface.
 2. ByteBuff#asSubByteBuffer(int, int, Pair) avoids Pair object creation on 
 every call because we pass in a shared Pair object. Still, since Pair can hold 
 only Objects, the primitive int has to be boxed into an Integer object every 
 time. This can be avoided by creating a new Pair type which pairs an Object 
 with a primitive int.





[jira] [Updated] (HBASE-14206) MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges

2015-08-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14206:
---
Fix Version/s: 1.3.0
   1.1.2
   1.2.0
   1.0.2
   0.98.14
   2.0.0

 MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges
 ---

 Key: HBASE-14206
 URL: https://issues.apache.org/jira/browse/HBASE-14206
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
 Environment: linux, java7
Reporter: Anton Nazaruk
Assignee: Anton Nazaruk
Priority: Critical
  Labels: filter
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0

 Attachments: 14206-test.patch, 14206-v1.txt


 I haven't found a way to attach the test program to the JIRA issue, so I've put it below:
 {code}
 public class MultiRowRangeFilterTest {
 
     byte[] key1Start = new byte[] {-3};
     byte[] key1End = new byte[] {-2};
     byte[] key2Start = new byte[] {5};
     byte[] key2End = new byte[] {6};
     byte[] badKey = new byte[] {-10};
 
     @Test
     public void testRanges() throws IOException {
         MultiRowRangeFilter filter = new MultiRowRangeFilter(Arrays.asList(
             new MultiRowRangeFilter.RowRange(key1Start, true, key1End, false),
             new MultiRowRangeFilter.RowRange(key2Start, true, key2End, false)));
         filter.filterRowKey(badKey, 0, 1);
         /*
          * FAILS -- includes BAD key!
          * Expected :SEEK_NEXT_USING_HINT
          * Actual   :INCLUDE
          */
         assertEquals(Filter.ReturnCode.SEEK_NEXT_USING_HINT, filter.filterKeyValue(null));
     }
 }
 {code}
 It seems to happen on 2.0.0-SNAPSHOT too, but I wasn't able to link that version with the included class.
 I have played with the algorithm for some time and found that a quick fix may be applied to the getNextRangeIndex(byte[] rowKey) method (hbase-client:1.1.0):
 {code}
 if (insertionPosition == 0 && !rangeList.get(insertionPosition).contains(rowKey)) {
   return ROW_BEFORE_FIRST_RANGE;
 }
 // FIX START
 if (!this.initialized) {
   this.initialized = true;
 }
 // FIX END
 return insertionPosition;
 {code} 
 Thanks, hope it will help.





[jira] [Updated] (HBASE-14098) Allow dropping caches behind compactions

2015-08-11 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14098:
--
Attachment: HBASE-14098-v7.patch

Whoops missed the mob file stuff being there.

 Allow dropping caches behind compactions
 

 Key: HBASE-14098
 URL: https://issues.apache.org/jira/browse/HBASE-14098
 Project: HBase
  Issue Type: Bug
  Components: Compaction, hadoop2, HFile
Affects Versions: 2.0.0, 1.3.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14098-v1.patch, HBASE-14098-v2.patch, 
 HBASE-14098-v3.patch, HBASE-14098-v4.patch, HBASE-14098-v5.patch, 
 HBASE-14098-v6.patch, HBASE-14098-v7.patch, HBASE-14098.patch







