[jira] [Commented] (HBASE-14477) Compaction improvements: Date tiered compaction policy

2015-12-09 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048334#comment-15048334
 ] 

Anoop Sam John commented on HBASE-14477:


Are you working on this now V?

> Compaction improvements: Date tiered compaction policy
> --
>
> Key: HBASE-14477
> URL: https://issues.apache.org/jira/browse/HBASE-14477
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
>
> For immutable and mostly immutable data the current SizeTiered-based 
> compaction policy is not efficient. 
> # There is no need to compact all files into one, because the data is (mostly) 
> immutable and we do not need to collect garbage (performance reasons will be 
> discussed later).
> # Size-tiered compaction is not suitable for applications where the most recent 
> data is most important, and it prevents efficient caching of this data. 
> The idea is pretty similar to DateTieredCompaction in Cassandra:
> http://www.datastax.com/dev/blog/datetieredcompactionstrategy
> http://www.datastax.com/dev/blog/dtcs-notes-from-the-field
> From Cassandra's own blog:
> {quote}
> Since DTCS can be used with any table, it is important to know when it is a 
> good idea, and when it is not. I’ll try to explain the spectrum and 
> trade-offs here:
> 1. Perfect Fit: Time Series Fact Data, Deletes by Default TTL: When you 
> ingest fact data that is ordered in time, with no deletes or overwrites. This 
> is the standard “time series” use case.
> 2. OK Fit: Time-Ordered, with limited updates across whole data set, or only 
> updates to recent data: When you ingest data that is (mostly) ordered in 
> time, but revise or delete a very small proportion of the overall data across 
> the whole timeline.
> 3. Not a Good Fit: many partial row updates or deletions over time: When you 
> need to partially revise or delete fields for rows that you read together. 
> Also, when you revise or delete rows within clustered reads.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14869) Better request latency and size histograms

2015-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048333#comment-15048333
 ] 

Hudson commented on HBASE-14869:


SUCCESS: Integrated in HBase-Trunk_matrix #541 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/541/])
HBASE-14869 Better request latency and size histograms. (Vikas Vishwakarma) (larsh: rev 
7bfbb6a3c9af4b0e2853b5ea2580a05bb471211b)
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsMasterFilesystemSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableRangeHistogram.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsSnapshotSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableTimeHistogram.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/thrift/MetricsThriftServerSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsEditsReplaySourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSourceImpl.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerMetrics.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableHistogram.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableSizeHistogram.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWALSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/DynamicMetricsRegistry.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsAssignmentManagerSourceImpl.java
* 
hbase-hadoop-compat/src/test/java/org/apache/hadoop/hbase/test/MetricsAssertHelper.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java
* 
hbase-hadoop2-compat/src/test/java/org/apache/hadoop/hbase/test/MetricsAssertHelperImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/balancer/MetricsBalancerSourceImpl.java


> Better request latency and size histograms
> --
>
> Key: HBASE-14869
> URL: https://issues.apache.org/jira/browse/HBASE-14869
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Lars Hofhansl
>Assignee: Vikas Vishwakarma
> Fix For: 2.0.0, 1.3.0, 0.98.17
>
> Attachments: 14869-test-0.98.txt, 14869-v1-0.98.txt, 
> 14869-v1-2.0.txt, 14869-v2-0.98.txt, 14869-v2-2.0.txt, 14869-v3-0.98.txt, 
> 14869-v4-0.98.txt, 14869-v5-0.98.txt, 14869-v6-0.98.txt, AppendSizeTime.png, 
> Get.png
>
>
> I just discussed this with a colleague.
> The get, put, etc, histograms that each region server keeps are somewhat 
> useless (depending on what you want to achieve of course), as they are 
> aggregated and calculated by each region server.
> It would be better to record the number of requests in certain latency 
> bands in addition to what we do now.
> For example, the number of gets that took 0-5ms, 6-10ms, 10-20ms, 20-50ms, 
> 50-100ms, 100-1000ms, > 1000ms, etc. (just as an example; it should be 
> configurable).
> That way we can do further calculations after the fact, and answer questions 
> like: How often did we miss our SLA? Percentage of requests that missed an 
> SLA, etc.
> Comments?
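
A rough sketch of the banded counters described above (the bucket boundaries are just the example values from this description; the class and method names are illustrative, not the MutableRangeHistogram code that ended up in the patch):

{code:title=LatencyBandCounter.java}
import java.util.concurrent.atomic.AtomicLongArray;

// Illustrative latency-band counter: one monotonically increasing counter per
// band, so SLA questions can be answered after the fact from the raw counts.
public class LatencyBandCounter {
  // Upper bounds of each band in milliseconds; anything slower than the last
  // bound falls into the overflow bucket.
  private final long[] upperBoundsMs = {5, 10, 20, 50, 100, 1000};
  private final AtomicLongArray counts = new AtomicLongArray(upperBoundsMs.length + 1);

  public void record(long latencyMs) {
    for (int i = 0; i < upperBoundsMs.length; i++) {
      if (latencyMs <= upperBoundsMs[i]) {
        counts.incrementAndGet(i);
        return;
      }
    }
    counts.incrementAndGet(upperBoundsMs.length); // > 1000ms
  }

  // Fraction of requests slower than slaMs, assuming slaMs is one of the
  // configured band boundaries (e.g. 100 for a 100ms SLA).
  public double fractionAbove(long slaMs) {
    long above = 0, total = 0;
    for (int i = 0; i <= upperBoundsMs.length; i++) {
      long c = counts.get(i);
      total += c;
      if (i == upperBoundsMs.length || upperBoundsMs[i] > slaMs) {
        above += c;
      }
    }
    return total == 0 ? 0.0 : (double) above / total;
  }
}
{code}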



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-7171) Initial web UI for region/memstore/storefiles details

2015-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048250#comment-15048250
 ] 

Hudson commented on HBASE-7171:
---

FAILURE: Integrated in HBase-1.2 #431 (See 
[https://builds.apache.org/job/HBase-1.2/431/])
HBASE-7171 Initial web UI for region/memstore/storefiles details (antonov: rev 
ded97582063ca16ccc5d0ab4cd93ae7afa66bdad)
* hbase-server/src/main/resources/hbase-webapps/regionserver/region.jsp
* 
hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/regionserver/RegionListTmpl.jamon
* hbase-server/src/main/resources/hbase-webapps/regionserver/storeFile.jsp


> Initial web UI for region/memstore/storefiles details
> -
>
> Key: HBASE-7171
> URL: https://issues.apache.org/jira/browse/HBASE-7171
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: stack
>Assignee: Mikhail Antonov
>  Labels: beginner
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-7171.patch, region_details.png, region_list.png, 
> storefile_details.png
>
>
> Click on a region in UI and get a listing of hfiles in HDFS and summary of 
> memstore content; click on an HFile and see its content



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-09 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048284#comment-15048284
 ] 

Elliott Clark commented on HBASE-14946:
---

The basic flow is like this:

I want to get a single column from lots of rows, so I create a list of gets and 
send them to table.get(List<Get>). If the regions for that table are spread out, 
those requests get chunked out to all the region servers, and no single region 
server gets too many. However, if one region server hosts lots of regions for 
that table, then a multi action can contain lots of gets. No single get is too 
onerous, but the region server won't return until every get is complete. So if 
thousands of gets are sent in one multi, the region server can retain lots of 
data in one thread.
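
For context, here is a minimal client-side illustration of that flow; the table, family, and qualifier names are made up, and it assumes a default HBase client configuration on the classpath:

{code:title=MultiGetExample.java}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MultiGetExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Table table = conn.getTable(TableName.valueOf("t1"))) {
      // One Get per row, each asking for a single column.
      List<Get> gets = new ArrayList<>();
      for (int i = 0; i < 10000; i++) {
        Get get = new Get(Bytes.toBytes("row-" + i));
        get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"));
        gets.add(get);
      }
      // The client groups these by region server and ships them as multi
      // actions; a server hosting many regions of the table can receive
      // thousands of gets in a single multi call.
      Result[] results = table.get(gets);
      System.out.println("fetched " + results.length + " rows");
    }
  }
}
{code}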

> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Attachments: HBASE-14946-v1.patch, HBASE-14946-v2.patch, 
> HBASE-14946-v3.patch, HBASE-14946-v5.patch, HBASE-14946.patch
>
>
> If a user issues a large list of different gets against a table, we will 
> send them along in a multi. The server un-wraps each get in the multi. While 
> no single get may be over the size limit, the total might be.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14954) IllegalArgumentException was thrown when doing online configuration change in CompactSplitThread

2015-12-09 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-14954:
---
Status: Patch Available  (was: Open)

> IllegalArgumentException was thrown when doing online configuration change in 
> CompactSplitThread
> 
>
> Key: HBASE-14954
> URL: https://issues.apache.org/jira/browse/HBASE-14954
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, regionserver
>Affects Versions: 1.1.2
>Reporter: Victor Xu
>Assignee: Victor Xu
> Attachments: HBASE-14954-v1.patch
>
>
> Online configuration change is a terrific feature for HBase administrators. 
> However, when we used this feature to tune the compaction thread pool size 
> online, it triggered an IllegalArgumentException. The cause is the order of 
> setMaximumPoolSize() and setCorePoolSize() on ThreadPoolExecutor: when 
> tuning the parameters bigger, we should setMax first; when tuning them 
> smaller, we need to setCore first. Besides, there is also a copy-paste bug in 
> the merge and split thread pools which I will fix as well.
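
For reference, a minimal standalone illustration of that ordering constraint using plain java.util.concurrent (this is not the CompactSplitThread code or the attached patch):

{code:title=PoolResizeOrder.java}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolResizeOrder {
  // Resize core and max together without tripping the "max < core" check:
  // grow the maximum before the core, shrink the core before the maximum.
  static void resize(ThreadPoolExecutor pool, int newSize) {
    if (newSize > pool.getCorePoolSize()) {
      pool.setMaximumPoolSize(newSize); // raise the ceiling first
      pool.setCorePoolSize(newSize);
    } else {
      pool.setCorePoolSize(newSize);    // lower the floor first
      pool.setMaximumPoolSize(newSize);
    }
  }

  public static void main(String[] args) {
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        2, 2, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
    resize(pool, 8);  // grow: setMaximumPoolSize first, then setCorePoolSize
    resize(pool, 1);  // shrink: setCorePoolSize first, then setMaximumPoolSize
    pool.shutdown();
  }
}
{code}

Calling the two setters in the opposite order for either direction is what makes ThreadPoolExecutor throw the IllegalArgumentException described above.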



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-09 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048276#comment-15048276
 ] 

Elliott Clark commented on HBASE-14946:
---

bq.It is not important that it be accurate?
Not 100% accurate. Just making sure to get an estimate of the size. If we're 
off by a byte here or there it's not a big deal.

bq.Volatile? Or it don't matter? Or one thread only?
One thread only. 

bq.throw new HBaseIOException("Response size would be too large");
Can do.

bq.So, we are going to break the client response? How do they get the full 
response back? Needs admin intervention?
The async process should retry all failed gets. Let me get a test to show that.

bq.Why does the scanner chunking not help here?
Multi actions won't contain scans, and we don't chunk on anything where 
isGetScan is true.
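
Roughly the kind of guard being discussed, as a generic sketch; the class, field, and method names are hypothetical, not the actual RSRpcServices/multi code:

{code:title=MultiSizeGuard.java}
import java.io.IOException;
import java.util.List;

// Keep a running estimate of the response size while serving the gets of one
// multi and bail out once it crosses the configured quota. A single handler
// thread serves the multi, so no volatile is needed on the counter.
public class MultiSizeGuard {
  private final long maxResponseSize;
  private long estimatedSize;

  public MultiSizeGuard(long maxResponseSize) {
    this.maxResponseSize = maxResponseSize;
  }

  public void addResult(List<byte[]> cells) throws IOException {
    for (byte[] cell : cells) {
      estimatedSize += cell.length; // a rough estimate is fine; off by a few bytes is OK
    }
    if (estimatedSize > maxResponseSize) {
      // The remaining actions would be failed back to the client, which
      // retries them in a follow-up multi.
      throw new IOException("Response size would be too large");
    }
  }
}
{code}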

> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Attachments: HBASE-14946-v1.patch, HBASE-14946-v2.patch, 
> HBASE-14946-v3.patch, HBASE-14946-v5.patch, HBASE-14946.patch
>
>
> If a user issues a large list of different gets against a table, we will 
> send them along in a multi. The server un-wraps each get in the multi. While 
> no single get may be over the size limit, the total might be.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14869) Better request latency and size histograms

2015-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048305#comment-15048305
 ] 

Hudson commented on HBASE-14869:


SUCCESS: Integrated in HBase-1.1-JDK7 #1616 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1616/])
HBASE-14869 Better request latency and size histograms. (Vikas Vishwakarma) (larsh: rev 
0ccdadfcd22a20f9e6b10ca0c5154411c99b517f)
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableSizeHistogram.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableTimeHistogram.java
* 
hbase-hadoop2-compat/src/test/java/org/apache/hadoop/hbase/test/MetricsAssertHelperImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWALSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsMasterFilesystemSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableHistogram.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsEditsReplaySourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/thrift/MetricsThriftServerSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsSnapshotSourceImpl.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerMetrics.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/balancer/MetricsBalancerSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/DynamicMetricsRegistry.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableRangeHistogram.java
* 
hbase-hadoop-compat/src/test/java/org/apache/hadoop/hbase/test/MetricsAssertHelper.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsAssignmentManagerSourceImpl.java


> Better request latency and size histograms
> --
>
> Key: HBASE-14869
> URL: https://issues.apache.org/jira/browse/HBASE-14869
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Lars Hofhansl
>Assignee: Vikas Vishwakarma
> Fix For: 2.0.0, 1.3.0, 0.98.17
>
> Attachments: 14869-test-0.98.txt, 14869-v1-0.98.txt, 
> 14869-v1-2.0.txt, 14869-v2-0.98.txt, 14869-v2-2.0.txt, 14869-v3-0.98.txt, 
> 14869-v4-0.98.txt, 14869-v5-0.98.txt, 14869-v6-0.98.txt, AppendSizeTime.png, 
> Get.png
>
>
> I just discussed this with a colleague.
> The get, put, etc, histograms that each region server keeps are somewhat 
> useless (depending on what you want to achieve of course), as they are 
> aggregated and calculated by each region server.
> It would be better to record the number of requests in certain latency 
> bands in addition to what we do now.
> For example, the number of gets that took 0-5ms, 6-10ms, 10-20ms, 20-50ms, 
> 50-100ms, 100-1000ms, > 1000ms, etc. (just as an example; it should be 
> configurable).
> That way we can do further calculations after the fact, and answer questions 
> like: How often did we miss our SLA? Percentage of requests that missed an 
> SLA, etc.
> Comments?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14915) Hanging test : org.apache.hadoop.hbase.mapreduce.TestImportExport

2015-12-09 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048348#comment-15048348
 ] 

Heng Chen commented on HBASE-14915:
---

I found something interesting.
Each failed test in TestImportExport always has a lot of logs like the ones below; this 
happens when we import HDFS files into an HTable.
{code}
2015-12-05 22:09:23,634 INFO  
[asf907.gq1.ygridcore.net,60842,1449352499962_ChoreService_1] 
regionserver.HRegionServer$PeriodicMemstoreFlusher(1585): 
asf907.gq1.ygridcore.net,60842,1449352499962-MemstoreFlusherChore requesting 
flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free 
WALs after random delay 5699ms
2015-12-05 22:09:24,729 INFO  
[asf907.gq1.ygridcore.net,60842,1449352499962_ChoreService_1] 
regionserver.HRegionServer$PeriodicMemstoreFlusher(1585): 
asf907.gq1.ygridcore.net,60842,1449352499962-MemstoreFlusherChore requesting 
flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free 
WALs after random delay 81344ms
{code}

Related code in HRegion#shouldFlush:
{code:title=HRegion.java}
  boolean shouldFlush(final StringBuffer whyFlush) {

    for (Store s : getStores()) {
      if (s.timeOfOldestEdit() < now - modifiedFlushCheckInterval) {
        // we have an old enough edit in the memstore, flush
        whyFlush.append(s.toString() + " has an old edit so flush to free WALs");
        return true;
      }
    }
    return false;
  }
{code}
As the log shows, it seems that PeriodicMemstoreFlusher sends a lot of identical 
FlushRequests to MemStoreFlusher, and the requests pile up in one queue in 
MemStoreFlusher.
So I guess memory exceeds the limit of the container, which is why we see this log:
{code}
2015-12-05 22:08:56,440 WARN  [ContainersLauncher #5] 
nodemanager.DefaultContainerExecutor(207): Exit code from container 
container_1449352527830_0013_01_02 is : 143
{code} 

Maybe we should avoid sending the same flush request, or send it periodically. Thoughts? 
[~stack]
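
One possible shape for the dedup idea, as a sketch; the names are hypothetical and this is not the real MemStoreFlusher queue:

{code:title=DedupingFlushQueue.java}
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Only enqueue a flush request for a region if one is not already pending, so
// a chatty periodic chore cannot flood the queue with identical requests.
public class DedupingFlushQueue {
  private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
  private final Set<String> pending = ConcurrentHashMap.newKeySet();

  // Called by the periodic chore; repeated requests for the same region are dropped.
  public boolean requestFlush(String regionName) {
    if (pending.add(regionName)) {
      queue.add(regionName);
      return true;
    }
    return false;
  }

  // Called by the flusher thread; once taken, new requests for the region are allowed again.
  public String take() throws InterruptedException {
    String regionName = queue.take();
    pending.remove(regionName);
    return regionName;
  }
}
{code}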


> Hanging test : org.apache.hadoop.hbase.mapreduce.TestImportExport
> -
>
> Key: HBASE-14915
> URL: https://issues.apache.org/jira/browse/HBASE-14915
> Project: HBase
>  Issue Type: Sub-task
>  Components: hangingTests
>Reporter: stack
> Attachments: HBASE-14915-branch-1.2.patch
>
>
> This test hangs a bunch:
> Here is latest:
> https://builds.apache.org/job/HBase-1.2/418/jdk=latest1.7,label=Hadoop/consoleText



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14869) Better request latency and size histograms

2015-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048249#comment-15048249
 ] 

Hudson commented on HBASE-14869:


FAILURE: Integrated in HBase-1.2 #431 (See 
[https://builds.apache.org/job/HBase-1.2/431/])
HBASE-14869 Better request latency and size histograms. (Vikas Vishwakarma) (larsh: rev 
2be6d40fa9589b51b50c4d0af273c94a14c0720a)
* 
hbase-hadoop-compat/src/test/java/org/apache/hadoop/hbase/test/MetricsAssertHelper.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsSnapshotSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableRangeHistogram.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsEditsReplaySourceImpl.java
* 
hbase-hadoop2-compat/src/test/java/org/apache/hadoop/hbase/test/MetricsAssertHelperImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableHistogram.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/thrift/MetricsThriftServerSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/DynamicMetricsRegistry.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsAssignmentManagerSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWALSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsMasterFilesystemSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableTimeHistogram.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerMetrics.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/balancer/MetricsBalancerSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableSizeHistogram.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionSourceImpl.java


> Better request latency and size histograms
> --
>
> Key: HBASE-14869
> URL: https://issues.apache.org/jira/browse/HBASE-14869
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Lars Hofhansl
>Assignee: Vikas Vishwakarma
> Fix For: 2.0.0, 1.3.0, 0.98.17
>
> Attachments: 14869-test-0.98.txt, 14869-v1-0.98.txt, 
> 14869-v1-2.0.txt, 14869-v2-0.98.txt, 14869-v2-2.0.txt, 14869-v3-0.98.txt, 
> 14869-v4-0.98.txt, 14869-v5-0.98.txt, 14869-v6-0.98.txt, AppendSizeTime.png, 
> Get.png
>
>
> I just discussed this with a colleague.
> The get, put, etc, histograms that each region server keeps are somewhat 
> useless (depending on what you want to achieve of course), as they are 
> aggregated and calculated by each region server.
> It would be better to record the number of requests in certain latency 
> bands in addition to what we do now.
> For example, the number of gets that took 0-5ms, 6-10ms, 10-20ms, 20-50ms, 
> 50-100ms, 100-1000ms, > 1000ms, etc. (just as an example; it should be 
> configurable).
> That way we can do further calculations after the fact, and answer questions 
> like: How often did we miss our SLA? Percentage of requests that missed an 
> SLA, etc.
> Comments?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14954) IllegalArgumentException was thrown when doing online configuration change in CompactSplitThread

2015-12-09 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048368#comment-15048368
 ] 

Anoop Sam John commented on HBASE-14954:


Hmm, that is a silly copy-paste mistake too... Patch LGTM.

> IllegalArgumentException was thrown when doing online configuration change in 
> CompactSplitThread
> 
>
> Key: HBASE-14954
> URL: https://issues.apache.org/jira/browse/HBASE-14954
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, regionserver
>Affects Versions: 1.1.2
>Reporter: Victor Xu
>Assignee: Victor Xu
> Attachments: HBASE-14954-v1.patch
>
>
> Online configuration change is a terrific feature for HBase administrators. 
> However, when we used this feature to tune the compaction thread pool size 
> online, it triggered an IllegalArgumentException. The cause is the order of 
> setMaximumPoolSize() and setCorePoolSize() on ThreadPoolExecutor: when 
> tuning the parameters bigger, we should setMax first; when tuning them 
> smaller, we need to setCore first. Besides, there is also a copy-paste bug in 
> the merge and split thread pools which I will fix as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-7171) Initial web UI for region/memstore/storefiles details

2015-12-09 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048258#comment-15048258
 ] 

Mikhail Antonov commented on HBASE-7171:


(Off topic: just noticed branch-2 came in after a fetch; what's the current policy 
regarding cherry-picks, should any commit going to master go there as well? 
[~mbertozzi]?)

> Initial web UI for region/memstore/storefiles details
> -
>
> Key: HBASE-7171
> URL: https://issues.apache.org/jira/browse/HBASE-7171
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: stack
>Assignee: Mikhail Antonov
>  Labels: beginner
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-7171.patch, region_details.png, region_list.png, 
> storefile_details.png
>
>
> Click on a region in UI and get a listing of hfiles in HDFS and summary of 
> memstore content; click on an HFile and see its content



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14947) WALProcedureStore improvements

2015-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048267#comment-15048267
 ] 

Hadoop QA commented on HBASE-14947:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12776484/HBASE-14947-v1.patch
  against master branch at commit 7bfbb6a3c9af4b0e2853b5ea2580a05bb471211b.
  ATTACHMENT ID: 12776484

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16806//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16806//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16806//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16806//console

This message is automatically generated.

> WALProcedureStore improvements
> --
>
> Key: HBASE-14947
> URL: https://issues.apache.org/jira/browse/HBASE-14947
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Reporter: Ashu Pachauri
>Assignee: Matteo Bertozzi
>Priority: Minor
> Attachments: HBASE-14947-v0.patch, HBASE-14947-v1.patch
>
>
> We ended up with a deadlock in HBASE-14943, with the storeTracker and lock 
> acquired in reverse order by syncLoop() and insert/update/delete. In the 
> syncLoop() we don't need the lock when we try to roll or removeInactive. 
> Also, we can move the insert/update/delete tracker check into the syncLoop, 
> avoiding the extra lock operation.
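
For readers unfamiliar with the failure mode, a generic lock-order-inversion illustration (not the actual WALProcedureStore code):

{code:title=LockOrderDeadlock.java}
// Thread A takes the tracker then the lock, thread B takes the lock then the
// tracker; each ends up waiting for the monitor the other already holds.
public class LockOrderDeadlock {
  private final Object lock = new Object();
  private final Object storeTracker = new Object();

  void syncLoop() {                 // order: storeTracker -> lock
    synchronized (storeTracker) {
      sleepQuietly(100);
      synchronized (lock) {
        // roll / removeInactive
      }
    }
  }

  void insert() {                   // order: lock -> storeTracker
    synchronized (lock) {
      sleepQuietly(100);
      synchronized (storeTracker) {
        // update the tracker
      }
    }
  }

  private static void sleepQuietly(long ms) {
    try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
  }

  public static void main(String[] args) {
    LockOrderDeadlock d = new LockOrderDeadlock();
    new Thread(d::syncLoop).start();
    new Thread(d::insert).start();  // with the sleeps, this reliably deadlocks
  }
}
{code}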



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-14915) Hanging test : org.apache.hadoop.hbase.mapreduce.TestImportExport

2015-12-09 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048348#comment-15048348
 ] 

Heng Chen edited comment on HBASE-14915 at 12/9/15 10:17 AM:
-

I found something interesting.
Each failed test in TestImportExport always has a lot of logs like the ones below; this 
happens when we import HDFS files into an HTable.
{code}
2015-12-05 22:09:23,634 INFO  
[asf907.gq1.ygridcore.net,60842,1449352499962_ChoreService_1] 
regionserver.HRegionServer$PeriodicMemstoreFlusher(1585): 
asf907.gq1.ygridcore.net,60842,1449352499962-MemstoreFlusherChore requesting 
flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free 
WALs after random delay 5699ms
2015-12-05 22:09:24,729 INFO  
[asf907.gq1.ygridcore.net,60842,1449352499962_ChoreService_1] 
regionserver.HRegionServer$PeriodicMemstoreFlusher(1585): 
asf907.gq1.ygridcore.net,60842,1449352499962-MemstoreFlusherChore requesting 
flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free 
WALs after random delay 81344ms
{code}

Related code in HRegion#shouldFlush:
{code:title=HRegion.java}
  boolean shouldFlush(final StringBuffer whyFlush) {

    for (Store s : getStores()) {
      if (s.timeOfOldestEdit() < now - modifiedFlushCheckInterval) {
        // we have an old enough edit in the memstore, flush
        whyFlush.append(s.toString() + " has an old edit so flush to free WALs");
        return true;
      }
    }
    return false;
  }
{code}
As the log shows, it seems that PeriodicMemstoreFlusher sends a lot of identical 
FlushRequests to MemStoreFlusher, and the requests pile up in one queue in 
MemStoreFlusher.
So I guess memory exceeds the limit of the container, which is why we see this log:
{code}
2015-12-05 22:08:56,440 WARN  [ContainersLauncher #5] 
nodemanager.DefaultContainerExecutor(207): Exit code from container 
container_1449352527830_0013_01_02 is : 143
{code} 

Maybe we should avoid sending the same flush request, or send it periodically.
I still want to figure out the deeper reason why 'there is an old edit in 
memstore'; it seems that our flush failed. 

I also noticed some logs after the import starts. It seems something is wrong with the 
last block in HDFS, but I am not sure about it.
{code}
2015-12-02 21:22:15,974 INFO  [IPC Server handler 1 on 44218] 
blockmanagement.BlockManager(2383): BLOCK* addStoredBlock: blockMap updated: 
127.0.0.1:59938 is added to 
blk_1073742196_1372{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[[DISK]DS-083a02af-1593-4938-9651-f812ac2cb91a:NORMAL|RBW]]}
 size 0
2015-12-02 21:22:16,032 INFO  [IPC Server handler 7 on 44218] 
blockmanagement.BlockManager(2383): BLOCK* addStoredBlock: blockMap updated: 
127.0.0.1:59938 is added to 
blk_1073742197_1373{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[[DISK]DS-b35bb0ca-6546-4f47-a8cf-239960b3356a:NORMAL|RBW]]}
 size 0
{code}









was (Author: chenheng):
I found something interesting.
Each failed test in TestImportExport always has a lot of logs like the ones below; this 
happens when we import HDFS files into an HTable.
{code}
2015-12-05 22:09:23,634 INFO  
[asf907.gq1.ygridcore.net,60842,1449352499962_ChoreService_1] 
regionserver.HRegionServer$PeriodicMemstoreFlusher(1585): 
asf907.gq1.ygridcore.net,60842,1449352499962-MemstoreFlusherChore requesting 
flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free 
WALs after random delay 5699ms
2015-12-05 22:09:24,729 INFO  
[asf907.gq1.ygridcore.net,60842,1449352499962_ChoreService_1] 
regionserver.HRegionServer$PeriodicMemstoreFlusher(1585): 
asf907.gq1.ygridcore.net,60842,1449352499962-MemstoreFlusherChore requesting 
flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free 
WALs after random delay 81344ms
{code}

Related code in HRegion#shouldFlush:
{code:title=HRegion.java}
  boolean shouldFlush(final StringBuffer whyFlush) {

    for (Store s : getStores()) {
      if (s.timeOfOldestEdit() < now - modifiedFlushCheckInterval) {
        // we have an old enough edit in the memstore, flush
        whyFlush.append(s.toString() + " has an old edit so flush to free WALs");
        return true;
      }
    }
    return false;
  }
{code}
As the log shows, it seems that PeriodicMemstoreFlusher sends a lot of identical 
FlushRequests to MemStoreFlusher, and the requests pile up in one queue in 
MemStoreFlusher.
So I guess memory exceeds the limit of the container, which is why we see this log:
{code}
2015-12-05 22:08:56,440 WARN  [ContainersLauncher #5] 
nodemanager.DefaultContainerExecutor(207): Exit code from container 
container_1449352527830_0013_01_02 is : 143
{code} 

Maybe we should avoid sending the same flush request, or send it periodically. Thoughts? 
[~stack]


> Hanging test : org.apache.hadoop.hbase.mapreduce.TestImportExport
> 

[jira] [Commented] (HBASE-13082) Coarsen StoreScanner locks to RegionScanner

2015-12-09 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048506#comment-15048506
 ] 

ramkrishna.s.vasudevan commented on HBASE-13082:


Oops. I need to prepare a patch for 1.3, which I started but left due to some 
conflicts. Let me complete it by the end of this week.

> Coarsen StoreScanner locks to RegionScanner
> ---
>
> Key: HBASE-13082
> URL: https://issues.apache.org/jira/browse/HBASE-13082
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: 13082-test.txt, 13082-v2.txt, 13082-v3.txt, 
> 13082-v4.txt, 13082.txt, 13082.txt, HBASE-13082.pdf, HBASE-13082_1.pdf, 
> HBASE-13082_12.patch, HBASE-13082_13.patch, HBASE-13082_14.patch, 
> HBASE-13082_15.patch, HBASE-13082_16.patch, HBASE-13082_17.patch, 
> HBASE-13082_18.patch, HBASE-13082_19.patch, HBASE-13082_1_WIP.patch, 
> HBASE-13082_2.pdf, HBASE-13082_2_WIP.patch, HBASE-13082_3.patch, 
> HBASE-13082_4.patch, HBASE-13082_9.patch, HBASE-13082_9.patch, 
> HBASE-13082_withoutpatch.jpg, HBASE-13082_withpatch.jpg, 
> LockVsSynchronized.java, gc.png, gc.png, gc.png, hits.png, next.png, next.png
>
>
> Continuing where HBASE-10015 left off.
> We can avoid locking (and memory fencing) inside StoreScanner by deferring to 
> the lock already held by the RegionScanner.
> In tests this shows quite a scan improvement and reduced CPU (the fences make 
> the cores wait for memory fetches).
> There are some drawbacks too:
> * All calls to RegionScanner need to remain synchronized
> * Implementors of coprocessors need to be diligent in following the locking 
> contract. For example, Phoenix does not lock RegionScanner.nextRaw() as 
> required in the documentation (not picking on Phoenix, this one is my fault 
> as I told them it's OK)
> * Possible starving of flushes and compactions under heavy read load: 
> RegionScanner operations would keep getting the locks and the 
> flushes/compactions would not be able to finalize the set of files.
> I'll have a patch soon.
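
A generic sketch of the coarsening idea (not the real RegionScanner/StoreScanner classes): the outer scanner holds the only lock, and the inner scanner relies on the documented contract that it is always called under that lock:

{code:title=CoarseLockSketch.java}
// Inner scanner: no synchronization or fences of its own; callers must hold
// the outer scanner's lock, which is the contract coprocessors have to follow.
class InnerScanner {
  private int position;

  int nextUnderOuterLock() {
    return position++;
  }
}

// Outer scanner: every public entry point is synchronized, so the inner
// scanner never needs its own locking.
class OuterScanner {
  private final InnerScanner inner = new InnerScanner();

  public synchronized int next() {
    return inner.nextUnderOuterLock();
  }

  public synchronized void close() {
    // Flushes/compactions coordinating with this scanner would also need this
    // lock, which is the starvation risk called out above.
  }
}
{code}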



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13153) Bulk Loaded HFile Replication

2015-12-09 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-13153:
--
Attachment: HBASE-13153-v19.patch

Patch addressing Anoop's comments and concerns from RB.
Thanks for the review, Anoop. 

> Bulk Loaded HFile Replication
> -
>
> Key: HBASE-13153
> URL: https://issues.apache.org/jira/browse/HBASE-13153
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: sunhaitao
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-13153-branch-1-v18.patch, HBASE-13153-v1.patch, 
> HBASE-13153-v10.patch, HBASE-13153-v11.patch, HBASE-13153-v12.patch, 
> HBASE-13153-v13.patch, HBASE-13153-v14.patch, HBASE-13153-v15.patch, 
> HBASE-13153-v16.patch, HBASE-13153-v17.patch, HBASE-13153-v18.patch, 
> HBASE-13153-v19.patch, HBASE-13153-v2.patch, HBASE-13153-v3.patch, 
> HBASE-13153-v4.patch, HBASE-13153-v5.patch, HBASE-13153-v6.patch, 
> HBASE-13153-v7.patch, HBASE-13153-v8.patch, HBASE-13153-v9.patch, 
> HBASE-13153.patch, HBase Bulk Load Replication-v1-1.pdf, HBase Bulk Load 
> Replication-v2.pdf, HBase Bulk Load Replication-v3.pdf, HBase Bulk Load 
> Replication.pdf, HDFS_HA_Solution.PNG
>
>
> Currently we plan to use the HBase replication feature to deal with a disaster 
> tolerance scenario. But we encounter an issue: we use bulk load very frequently, 
> and because bulk load bypasses the write path and does not generate WAL 
> entries, the data will not be replicated to the backup cluster. It's 
> inappropriate to bulk load twice, on both the active cluster and the backup 
> cluster. So I advise making some modifications to the bulk load feature to 
> enable bulk loading to both the active cluster and the backup cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14957) issue in starting start-hbase.sh

2015-12-09 Thread gaurav kandpal (JIRA)
gaurav kandpal created HBASE-14957:
--

 Summary: issue in starting start-hbase.sh
 Key: HBASE-14957
 URL: https://issues.apache.org/jira/browse/HBASE-14957
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.94.9
Reporter: gaurav kandpal
Priority: Blocker
 Fix For: 0.94.9


For initializing Nutch, after configuring Nutch, when I try to kick off 
start-hbase.sh I get the below error.

/cygdrive/c/users/gaurav.kandpal/desktop/test/hbase-0.94.9.tar/hbase-0.94.9/hbase-0.94.9/bin/../conf/hbase-env.sh:
 line 101: $'\r': command not found
/cygdrive/c/users/gaurav.kandpal/desktop/test/hbase-0.94.9.tar/hbase-0.94.9/hbase-0.94.9/bin/../conf/hbase-env.sh:
 line 104: $'\r': command not found
/cygdrive/c/users/gaurav.kandpal/desktop/test/hbase-0.94.9.tar/hbase-0.94.9/hbase-0.94.9/bin/../conf/hbase-env.sh:
 line 107: $'\r': command not found
/cygdrive/c/users/gaurav.kandpal/desktop/test/hbase-0.94.9.tar/hbase-0.94.9/hbase-0.94.9/bin/../conf/hbase-env.sh:
 line 110: $'\r': command not found
/cygdrive/c/users/gaurav.kandpal/desktop/test/hbase-0.94.9.tar/hbase-0.94.9/hbase-0.94.9/bin/../conf/hbase-env.sh:
 line 115: $'\r': command not found
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
'nrecognized VM option 'UseConcMarkSweepGC
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
'nrecognized VM option 'UseConcMarkSweepGC
starting master, logging to 
/cygdrive/c/users/gaurav.kandpal/desktop/test/hbase-0.94.9.tar/hbase-0.94.9/hbase-0.94.9/bin/../logs/hbase-Gaurav.Kandpal-master-gauravk.out
localhost: 
/cygdrive/c/users/gaurav.kandpal/desktop/test/hbase-0.94.9.tar/hbase-0.94.9/hbase-0.94.9/bin/regionservers.sh:
 line 64: ssh: command not found

reference URL is 

https://gist.github.com/xrstf/b48a970098a8e76943b9
 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14004) [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin

2015-12-09 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048552#comment-15048552
 ] 

Phil Yang commented on HBASE-14004:
---

You are right, we should change the logic of the replicator.

And I am not an expert, so I have a question about the idempotence of HBase 
operations: what will happen if we replay an entry more than once? Consider 
these scenarios, where the numbers are the seq ids:

1, 2, 3, 4, 5---this is the normal order
1, 3, 2, 4, 5---the order is wrong but each log entry is read only once
1, 1, 2, 3, 4, 5---we replay one entry twice but the duplicates are adjacent
1, 2, 3, 1, 4, 5---we replay one entry twice and the duplicates are not adjacent
1, 2, 3, 1, 2, 3, 4, 5---the order is wrong but the whole subsequence is 
repeated, so the relative order is preserved

Are they all wrong except the first? It seems that the last one is not wrong?

> [Replication] Inconsistency between Memstore and WAL may result in data in 
> remote cluster that is not in the origin
> ---
>
> Key: HBASE-14004
> URL: https://issues.apache.org/jira/browse/HBASE-14004
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: He Liangliang
>Priority: Critical
>  Labels: replication, wal
>
> Looks like the current write path can cause inconsistency between the 
> memstore/hfile and the WAL, which causes the slave cluster to have more data 
> than the master cluster.
> The simplified write path looks like:
> 1. insert record into Memstore
> 2. write record to WAL
> 3. sync WAL
> 4. rollback Memstore if 3 fails
> It's possible that the HDFS sync RPC call fails, but the data has already 
> (maybe partially) been transported to the DNs and finally gets persisted. As a 
> result, the handler will roll back the Memstore and the later flushed HFile 
> will also skip this record.
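
A minimal sketch of the simplified write path described above, with hypothetical interfaces rather than the real HRegion/FSHLog code, to make the race concrete:

{code:title=WritePathSketch.java}
import java.io.IOException;

// Hypothetical interfaces standing in for the memstore and the WAL.
interface Memstore { void insert(byte[] record); void rollback(byte[] record); }
interface Wal { void write(byte[] record) throws IOException; void sync() throws IOException; }

class WritePathSketch {
  private final Memstore memstore;
  private final Wal wal;

  WritePathSketch(Memstore memstore, Wal wal) {
    this.memstore = memstore;
    this.wal = wal;
  }

  void append(byte[] record) throws IOException {
    memstore.insert(record);        // 1. insert record into Memstore
    wal.write(record);              // 2. write record to WAL
    try {
      wal.sync();                   // 3. sync WAL (HDFS sync RPC)
    } catch (IOException e) {
      // 4. rollback Memstore if the sync fails. The issue: the sync RPC can
      //    fail even though the bytes already reached the DNs and later get
      //    persisted, so the WAL (and replication) ends up ahead of the
      //    memstore and the flushed HFile.
      memstore.rollback(record);
      throw e;
    }
  }
}
{code}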



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14895) Seek only to the newly flushed file on scanner reset on flush

2015-12-09 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14895:
---
Attachment: HBASE-14895_2.patch

Updated patch addressing the comments in RB. It also has some more changes: 
removing the nullifyHeap method (because we no longer do that) and some method 
and variable renamings.

> Seek only to the newly flushed file on scanner reset on flush
> -
>
> Key: HBASE-14895
> URL: https://issues.apache.org/jira/browse/HBASE-14895
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-14895.patch, HBASE-14895_1.patch, 
> HBASE-14895_1.patch, HBASE-14895_2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14895) Seek only to the newly flushed file on scanner reset on flush

2015-12-09 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14895:
---
Status: Patch Available  (was: Open)

> Seek only to the newly flushed file on scanner reset on flush
> -
>
> Key: HBASE-14895
> URL: https://issues.apache.org/jira/browse/HBASE-14895
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-14895.patch, HBASE-14895_1.patch, 
> HBASE-14895_1.patch, HBASE-14895_2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14954) IllegalArgumentException was thrown when doing online configuration change in CompactSplitThread

2015-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048652#comment-15048652
 ] 

Hadoop QA commented on HBASE-14954:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12776468/HBASE-14954-v1.patch
  against master branch at commit 7bfbb6a3c9af4b0e2853b5ea2580a05bb471211b.
  ATTACHMENT ID: 12776468

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16808//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16808//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16808//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16808//console

This message is automatically generated.

> IllegalArgumentException was thrown when doing online configuration change in 
> CompactSplitThread
> 
>
> Key: HBASE-14954
> URL: https://issues.apache.org/jira/browse/HBASE-14954
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, regionserver
>Affects Versions: 1.1.2
>Reporter: Victor Xu
>Assignee: Victor Xu
> Attachments: HBASE-14954-v1.patch
>
>
> Online configuration change is a terrific feature for HBase administrators. 
> However, when we used this feature to tune the compaction thread pool size 
> online, it triggered an IllegalArgumentException. The cause is the order of 
> setMaximumPoolSize() and setCorePoolSize() on ThreadPoolExecutor: when 
> tuning the parameters bigger, we should setMax first; when tuning them 
> smaller, we need to setCore first. Besides, there is also a copy-paste bug in 
> the merge and split thread pools which I will fix as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14895) Seek only to the newly flushed file on scanner reset on flush

2015-12-09 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14895:
---
Status: Open  (was: Patch Available)

> Seek only to the newly flushed file on scanner reset on flush
> -
>
> Key: HBASE-14895
> URL: https://issues.apache.org/jira/browse/HBASE-14895
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-14895.patch, HBASE-14895_1.patch, 
> HBASE-14895_1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14936) CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()

2015-12-09 Thread Jianwei Cui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048641#comment-15048641
 ] 

Jianwei Cui commented on HBASE-14936:
-

Sorry for the late reply. It seems CombinedBlockCache should also override some 
other methods, such as getHitRatio(), getSumHitCountsPastNPeriods(), etc. I 
will update the patch to include these overrides and test cases.
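
A sketch of the kind of delegating aggregation being discussed; it is illustrative only (the real patch would override methods of CacheStats inside CombinedBlockCache), with getHitCount()/getRequestCount() standing in for the underlying counters:

{code:title=CombinedStatsSketch.java}
// Hypothetical stand-in for the per-cache CacheStats objects.
interface Stats {
  long getHitCount();
  long getRequestCount();
  void rollMetricsPeriod();
}

class CombinedStatsSketch {
  private final Stats lruCacheStats;
  private final Stats bucketCacheStats;

  CombinedStatsSketch(Stats lru, Stats bucket) {
    this.lruCacheStats = lru;
    this.bucketCacheStats = bucket;
  }

  // Forward the period roll to both underlying caches, as in the snippet below.
  void rollMetricsPeriod() {
    lruCacheStats.rollMetricsPeriod();
    bucketCacheStats.rollMetricsPeriod();
  }

  // Hit ratio computed over both caches combined, not just one of them.
  double getHitRatio() {
    double requests = lruCacheStats.getRequestCount() + bucketCacheStats.getRequestCount();
    double hits = lruCacheStats.getHitCount() + bucketCacheStats.getHitCount();
    return requests == 0 ? 0.0 : hits / requests;
  }
}
{code}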

> CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()
> --
>
> Key: HBASE-14936
> URL: https://issues.apache.org/jira/browse/HBASE-14936
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.1.2
>Reporter: Jianwei Cui
> Attachments: HBASE-14936-trunk.patch
>
>
> It seems CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod() as
> {code}
> public void rollMetricsPeriod() {
>   lruCacheStats.rollMetricsPeriod();
>   bucketCacheStats.rollMetricsPeriod();
> }
> {code}
> otherwise, CombinedBlockCache.getHitRatioPastNPeriods() and 
> CombinedBlockCache.getHitCachingRatioPastNPeriods() will always return 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-14957) issue in starting start-hbase.sh

2015-12-09 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi resolved HBASE-14957.
---
   Resolution: Invalid
Fix Version/s: (was: 0.94.9)

> issue in starting start-hbase.sh
> 
>
> Key: HBASE-14957
> URL: https://issues.apache.org/jira/browse/HBASE-14957
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.94.9
>Reporter: gaurav kandpal
>Priority: Blocker
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> For initializing Nutch, after configuring Nutch, when I try to kick off 
> start-hbase.sh I get the below error.
> /cygdrive/c/users/gaurav.kandpal/desktop/test/hbase-0.94.9.tar/hbase-0.94.9/hbase-0.94.9/bin/../conf/hbase-env.sh:
>  line 101: $'\r': command not found
> /cygdrive/c/users/gaurav.kandpal/desktop/test/hbase-0.94.9.tar/hbase-0.94.9/hbase-0.94.9/bin/../conf/hbase-env.sh:
>  line 104: $'\r': command not found
> /cygdrive/c/users/gaurav.kandpal/desktop/test/hbase-0.94.9.tar/hbase-0.94.9/hbase-0.94.9/bin/../conf/hbase-env.sh:
>  line 107: $'\r': command not found
> /cygdrive/c/users/gaurav.kandpal/desktop/test/hbase-0.94.9.tar/hbase-0.94.9/hbase-0.94.9/bin/../conf/hbase-env.sh:
>  line 110: $'\r': command not found
> /cygdrive/c/users/gaurav.kandpal/desktop/test/hbase-0.94.9.tar/hbase-0.94.9/hbase-0.94.9/bin/../conf/hbase-env.sh:
>  line 115: $'\r': command not found
> Error: Could not create the Java Virtual Machine.
> Error: A fatal exception has occurred. Program will exit.
> 'nrecognized VM option 'UseConcMarkSweepGC
> Error: Could not create the Java Virtual Machine.
> Error: A fatal exception has occurred. Program will exit.
> 'nrecognized VM option 'UseConcMarkSweepGC
> starting master, logging to 
> /cygdrive/c/users/gaurav.kandpal/desktop/test/hbase-0.94.9.tar/hbase-0.94.9/hbase-0.94.9/bin/../logs/hbase-Gaurav.Kandpal-master-gauravk.out
> localhost: 
> /cygdrive/c/users/gaurav.kandpal/desktop/test/hbase-0.94.9.tar/hbase-0.94.9/hbase-0.94.9/bin/regionservers.sh:
>  line 64: ssh: command not found
> reference URL is 
> https://gist.github.com/xrstf/b48a970098a8e76943b9
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14957) issue in starting start-hbase.sh

2015-12-09 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048673#comment-15048673
 ] 

Ashish Singhi commented on HBASE-14957:
---

Looks like you are running on Windows. 0.94.9 is a very old release; I am not sure if 
the 0.94 version is supported on Windows.

JIRA is for project development tracking. For user or troubleshooting help, 
please write to {{u...@hbase.apache.org}}.

> issue in starting start-hbase.sh
> 
>
> Key: HBASE-14957
> URL: https://issues.apache.org/jira/browse/HBASE-14957
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.94.9
>Reporter: gaurav kandpal
>Priority: Blocker
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> For initializing Nutch, after configuring Nutch, when I try to kick off 
> start-hbase.sh I get the below error.
> /cygdrive/c/users/gaurav.kandpal/desktop/test/hbase-0.94.9.tar/hbase-0.94.9/hbase-0.94.9/bin/../conf/hbase-env.sh:
>  line 101: $'\r': command not found
> /cygdrive/c/users/gaurav.kandpal/desktop/test/hbase-0.94.9.tar/hbase-0.94.9/hbase-0.94.9/bin/../conf/hbase-env.sh:
>  line 104: $'\r': command not found
> /cygdrive/c/users/gaurav.kandpal/desktop/test/hbase-0.94.9.tar/hbase-0.94.9/hbase-0.94.9/bin/../conf/hbase-env.sh:
>  line 107: $'\r': command not found
> /cygdrive/c/users/gaurav.kandpal/desktop/test/hbase-0.94.9.tar/hbase-0.94.9/hbase-0.94.9/bin/../conf/hbase-env.sh:
>  line 110: $'\r': command not found
> /cygdrive/c/users/gaurav.kandpal/desktop/test/hbase-0.94.9.tar/hbase-0.94.9/hbase-0.94.9/bin/../conf/hbase-env.sh:
>  line 115: $'\r': command not found
> Error: Could not create the Java Virtual Machine.
> Error: A fatal exception has occurred. Program will exit.
> 'nrecognized VM option 'UseConcMarkSweepGC
> Error: Could not create the Java Virtual Machine.
> Error: A fatal exception has occurred. Program will exit.
> 'nrecognized VM option 'UseConcMarkSweepGC
> starting master, logging to 
> /cygdrive/c/users/gaurav.kandpal/desktop/test/hbase-0.94.9.tar/hbase-0.94.9/hbase-0.94.9/bin/../logs/hbase-Gaurav.Kandpal-master-gauravk.out
> localhost: 
> /cygdrive/c/users/gaurav.kandpal/desktop/test/hbase-0.94.9.tar/hbase-0.94.9/hbase-0.94.9/bin/regionservers.sh:
>  line 64: ssh: command not found
> reference URL is 
> https://gist.github.com/xrstf/b48a970098a8e76943b9
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14954) IllegalArgumentException was thrown when doing online configuration change in CompactSplitThread

2015-12-09 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14954:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.1.3
   1.3.0
   1.2.0
   2.0.0
   Status: Resolved  (was: Patch Available)

> IllegalArgumentException was thrown when doing online configuration change in 
> CompactSplitThread
> 
>
> Key: HBASE-14954
> URL: https://issues.apache.org/jira/browse/HBASE-14954
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, regionserver
>Affects Versions: 1.1.2
>Reporter: Victor Xu
>Assignee: Victor Xu
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-14954-v1.patch
>
>
> Online configuration change is a terrific feature for HBase administrators. 
> However, when we used this feature to tune the compaction thread pool size 
> online, it triggered an IllegalArgumentException. The cause is the order of 
> setMaximumPoolSize() and setCorePoolSize() on ThreadPoolExecutor: when 
> tuning the parameters bigger, we should setMax first; when tuning them 
> smaller, we need to setCore first. Besides, there is also a copy-paste bug in 
> the merge and split thread pools which I will fix as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14954) IllegalArgumentException was thrown when doing online configuration change in CompactSplitThread

2015-12-09 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048817#comment-15048817
 ] 

Ted Yu commented on HBASE-14954:


Thanks for the patch, Victor

> IllegalArgumentException was thrown when doing online configuration change in 
> CompactSplitThread
> 
>
> Key: HBASE-14954
> URL: https://issues.apache.org/jira/browse/HBASE-14954
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, regionserver
>Affects Versions: 1.1.2
>Reporter: Victor Xu
>Assignee: Victor Xu
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-14954-v1.patch
>
>
> Online configuration change is a terrific feature for HBase administrators. 
> However, when we used this feature to tune the compaction thread pool size 
> online, it triggered an IllegalArgumentException. The cause is the order of 
> setMaximumPoolSize() and setCorePoolSize() on ThreadPoolExecutor: when 
> tuning the parameters bigger, we should setMax first; when tuning them 
> smaller, we need to setCore first. Besides, there is also a copy-paste bug in 
> the merge and split thread pools which I will fix as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14895) Seek only to the newly flushed file on scanner reset on flush

2015-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1504#comment-1504
 ] 

Hadoop QA commented on HBASE-14895:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12776530/HBASE-14895_2.patch
  against master branch at commit 7bfbb6a3c9af4b0e2853b5ea2580a05bb471211b.
  ATTACHMENT ID: 12776530

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
new checkstyle errors. Check build console for list of new errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16809//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16809//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16809//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16809//console

This message is automatically generated.

> Seek only to the newly flushed file on scanner reset on flush
> -
>
> Key: HBASE-14895
> URL: https://issues.apache.org/jira/browse/HBASE-14895
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-14895.patch, HBASE-14895_1.patch, 
> HBASE-14895_1.patch, HBASE-14895_2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14958) regionserver.HRegionServer: Master passed us a different hostname to use; was=n04docker2, but now=192.168.3.114

2015-12-09 Thread Yong Zheng (JIRA)
Yong Zheng created HBASE-14958:
--

 Summary: regionserver.HRegionServer: Master passed us a different 
hostname to use; was=n04docker2, but now=192.168.3.114
 Key: HBASE-14958
 URL: https://issues.apache.org/jira/browse/HBASE-14958
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.2
 Environment: physical machines: redhat7.1
docker version: 1.9.1
Reporter: Yong Zheng


I have two physical machines: c3m3n03docker and c3m3n04docker.
I started two docker instances per physical node. the topology is like:

n03docker1(172.17.1.2)  -\
  | br0(172.17.1.1)  +  c3m3n03
n03docker2(172.17.1.3) -/


n04docker1(172.17.2.2)  -\
  | br0(172.17.2.1)  +  c3m3n04
n04docker2(172.17.2.3) -/

For the physical machines, c3m3n03 is bundled with physical adapter enp11s0f0 with 
IP 192.168.3.113/16; c3m3n04 is bundled with physical adapter enp11s0f0 with 
IP 192.168.3.114/16. These two physical adapters are connected to the same switch.

Note: br0 is not bundled to physical adapter enp11s0f0 on either node, so all 
requests from 172.17.2.x will be source-NATed as 192.168.3.114 (c3m3n04) and 
forwarded to c3m3n03.

n03docker1: hbase(1.1.2) master
n03docker2: region server
n04docker1: region server
n04docker2: region server

I first start n03docker1 and n03docker2, and they work; after that, I start 
n04docker1 and it reports:

2015-12-09 08:01:58,259 ERROR 
[regionserver/n04docker2.gpfs.net/172.17.2.3:16020] regionserver.HRegionServer: 
Master passed us a different hostname to use; was=n04docker2.gpfs.net, but 
now=192.168.3.114

on the master logs:
2015-12-09 08:11:12,234 INFO  [PriorityRpcServer.handler=0,queue=0,port=16000] 
master.ServerManager: Registering server=192.168.3.114,16020,144970721

So, you see, when the HBase master receives the requests from n04docker1, all these 
requests are source-NATed as 192.168.3.114 (not 172.17.2.2), and the HBase master 
passes 192.168.3.114 back to 172.17.2.2 (n04docker1). Thus, n04docker1 (172.17.2.2) 
reports the exception in its logs.

Does HBase not support running in a virtualized cluster? SNAT is widely used in 
virtualization, and if the HBase master takes the remote hostname/IP (here 
192.168.3.114) and passes it back to the region server, it will hit this issue.

HBASE-8667 doesn't fix this issue; that fix has been in HBase since 0.98 (I'm 
running HBase 1.1.2).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14958) regionserver.HRegionServer: Master passed us a different hostname to use; was=n04docker2, but now=192.168.3.114

2015-12-09 Thread Yong Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yong Zheng updated HBASE-14958:
---
Description: 
I have two physical machines: c3m3n03docker and c3m3n04docker.
I started two docker instances per physical node. the topology is like:

n03docker1(172.17.1.2)  -\
  | br0(172.17.1.1)  +  c3m3n03
n03docker2(172.17.1.3) -/


n04docker1(172.17.2.2)  -\
  | br0(172.17.2.1)  +  c3m3n04
n04docker2(172.17.2.3) -/

For the physical machines, c3m3n03 is bundled with physical adapter enp11s0f0 with 
IP 192.168.3.113/16; c3m3n04 is bundled with physical adapter enp11s0f0 with 
IP 192.168.3.114/16. These two physical adapters are connected to the same switch.

Note: br0 is not bundled to physical adapter enp11s0f0 on either node, so all 
requests from 172.17.2.x will be source-NATed as 192.168.3.114 (c3m3n04) and 
forwarded to c3m3n03.

n03docker1: hbase(1.1.2) master
n03docker2: region server
n04docker1: region server
n04docker2: region server

I first start n03docker1 and n03docker2, and they work; after that, I start 
n04docker2 and it reports:

2015-12-09 08:01:58,259 ERROR 
[regionserver/n04docker2.gpfs.net/172.17.2.3:16020] regionserver.HRegionServer: 
Master passed us a different hostname to use; was=n04docker2.gpfs.net, but 
now=192.168.3.114

on the master logs:
2015-12-09 08:11:12,234 INFO  [PriorityRpcServer.handler=0,queue=0,port=16000] 
master.ServerManager: Registering server=192.168.3.114,16020,144970721

So, you see, when the HBase master receives the requests from n04docker2, all these 
requests are source-NATed as 192.168.3.114 (not 172.17.2.3), and the HBase master 
passes 192.168.3.114 back to 172.17.2.3 (n04docker2). Thus, n04docker2 (172.17.2.3) 
reports the exception in its logs.

Does HBase not support running in a virtualized cluster? SNAT is widely used in 
virtualization, and if the HBase master takes the remote hostname/IP (here 
192.168.3.114) and passes it back to the region server, it will hit this issue.

HBASE-8667 doesn't fix this issue; that fix has been in HBase since 0.98 (I'm 
running HBase 1.1.2).

  was:
I have two physical machines: c3m3n03docker and c3m3n04docker.
I started two docker instances per physical node. the topology is like:

n03docker1(172.17.1.2)  -\
  | br0(172.17.1.1)  +  c3m3n03
n03docker2(172.17.1.3) -/


n04docker1(172.17.2.2)  -\
  | br0(172.17.2.1)  +  c3m3n04
n04docker2(172.17.2.3) -/

for physical machines, c3m3n03 is bundled with physical adapter enp11s0f0 with 
IP (192.168.3.113/16); c3m3n04 is bundled with physical adapter enp11s0f0 with 
IP(192.168.3.114/16). these two physical adapters are connecting to the same 
switch.

Note: br0 is not bundled to physical adapter enp11s0f0  on both nodes. so, all 
requests in 172.17.2.x will be source NAT as 192.168.3.114(c3m3n04) and 
forwarded to c3m3n03.

n03docker1: hbase(1.1.2) master
n03docker2: region server
n04docker1: region server
n04docker2: region server

I first start the n03docker1 and n03docker2, it works; after that, I start 
n04docker1 and it will reported:

2015-12-09 08:01:58,259 ERROR 
[regionserver/n04docker2.gpfs.net/172.17.2.3:16020] regionserver.HRegionServer: 
Master passed us a different hostname to use; was=n04docker2.gpfs.net, but 
now=192.168.3.114

on the master logs:
2015-12-09 08:11:12,234 INFO  [PriorityRpcServer.handler=0,queue=0,port=16000] 
master.ServerManager: Registering server=192.168.3.114,16020,144970721

So, you see, when hbase master receives the requests from n04docker1, all these 
requests are source NATed with 192.168.3.114(not 172.17.2.2).  and hbase master 
passes 192.168.3.114 back to 172.17.2.2(n04docker1). Thus, 
n04docker1(172.17.2.2) reported exceptions in logs.

hbase doesn't support running in virtualization cluster? because SNAT is widely 
used in virtualization. if hbase master get remote hostname/ip(thus get 
192.168.3.114) and pass it back to region server, it will hit this issues.

HBASE-8667 doesn't fix this issue because the fix has been hbase 0.98(I'm 
taking hbase 1.1.2).


> regionserver.HRegionServer: Master passed us a different hostname to use; 
> was=n04docker2, but now=192.168.3.114
> ---
>
> Key: HBASE-14958
> URL: https://issues.apache.org/jira/browse/HBASE-14958
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
> Environment: physical machines: redhat7.1
> docker version: 1.9.1
>Reporter: Yong Zheng
>
> I have two physical machines: c3m3n03docker and c3m3n04docker.
> I started two docker instances per physical node. the topology is like:
> n03docker1(172.17.1.2)  -\
> 

[jira] [Commented] (HBASE-14954) IllegalArgumentException was thrown when doing online configuration change in CompactSplitThread

2015-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048844#comment-15048844
 ] 

Hudson commented on HBASE-14954:


SUCCESS: Integrated in HBase-1.2-IT #331 (See 
[https://builds.apache.org/job/HBase-1.2-IT/331/])
HBASE-14954 IllegalArgumentException was thrown when doing online (tedyu: rev 
f4fec4cd13043c8d50e2bd063903269382a6306f)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactSplitThread.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java


> IllegalArgumentException was thrown when doing online configuration change in 
> CompactSplitThread
> 
>
> Key: HBASE-14954
> URL: https://issues.apache.org/jira/browse/HBASE-14954
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, regionserver
>Affects Versions: 1.1.2
>Reporter: Victor Xu
>Assignee: Victor Xu
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-14954-v1.patch
>
>
> Online configuration change is a terrific feature for HBase administrators. 
> However, when we use this feature to tune the compaction thread pool size online, 
> it triggers an IllegalArgumentException. The cause is the call order of 
> setMaximumPoolSize() and setCorePoolSize() on ThreadPoolExecutor: when 
> making the pool larger, we should call setMaximumPoolSize() first; when making 
> it smaller, we need to call setCorePoolSize() first. Besides, there is also a 
> copy-paste bug in the merge and split thread pools which I will fix together.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14919) Infrastructure refactoring

2015-12-09 Thread Eshcar Hillel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048897#comment-15048897
 ] 

Eshcar Hillel commented on HBASE-14919:
---

The patch is now available in rb.
I will work to fix the style warnings.
This would be a good time to give feedback :).

> Infrastructure refactoring
> --
>
> Key: HBASE-14919
> URL: https://issues.apache.org/jira/browse/HBASE-14919
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-14919-V01.patch, HBASE-14919-V01.patch, 
> HBASE-14919-V02.patch
>
>
> Refactoring the MemStore hierarchy, introducing the segment (StoreSegment) as 
> a first-class citizen and decoupling the memstore scanner from the memstore 
> implementation.
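
As a rough, hypothetical illustration of the direction (interface and method names here are 
assumptions for illustration, not taken from the patch):

{code}
// Hypothetical sketch only; not the actual patch.
import org.apache.hadoop.hbase.Cell;

interface StoreSegment {
  long heapSize();                            // memory accounted to this segment
  SegmentScanner getScanner(long readPoint);  // scanner decoupled from the memstore class
}

interface SegmentScanner {
  Cell peek();
  Cell next();
  void close();
}
{code}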



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13153) Bulk Loaded HFile Replication

2015-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048876#comment-15048876
 ] 

Hadoop QA commented on HBASE-13153:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12776537/HBASE-13153-v19.patch
  against master branch at commit 7bfbb6a3c9af4b0e2853b5ea2580a05bb471211b.
  ATTACHMENT ID: 12776537

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 42 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
new checkstyle errors. Check build console for list of new errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16810//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16810//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16810//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16810//console

This message is automatically generated.

> Bulk Loaded HFile Replication
> -
>
> Key: HBASE-13153
> URL: https://issues.apache.org/jira/browse/HBASE-13153
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: sunhaitao
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-13153-branch-1-v18.patch, HBASE-13153-v1.patch, 
> HBASE-13153-v10.patch, HBASE-13153-v11.patch, HBASE-13153-v12.patch, 
> HBASE-13153-v13.patch, HBASE-13153-v14.patch, HBASE-13153-v15.patch, 
> HBASE-13153-v16.patch, HBASE-13153-v17.patch, HBASE-13153-v18.patch, 
> HBASE-13153-v19.patch, HBASE-13153-v2.patch, HBASE-13153-v3.patch, 
> HBASE-13153-v4.patch, HBASE-13153-v5.patch, HBASE-13153-v6.patch, 
> HBASE-13153-v7.patch, HBASE-13153-v8.patch, HBASE-13153-v9.patch, 
> HBASE-13153.patch, HBase Bulk Load Replication-v1-1.pdf, HBase Bulk Load 
> Replication-v2.pdf, HBase Bulk Load Replication-v3.pdf, HBase Bulk Load 
> Replication.pdf, HDFS_HA_Solution.PNG
>
>
> Currently we plan to use the HBase replication feature to handle a disaster 
> tolerance scenario, but we hit an issue: we use bulk load very frequently, and 
> because bulk load bypasses the write path it generates no WAL entries, so the 
> data will not be replicated to the backup cluster. It's inappropriate to bulk 
> load twice, on both the active cluster and the backup cluster. So I suggest 
> modifying the bulk load feature so that bulk-loaded data reaches both the 
> active cluster and the backup cluster.
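
For reference, a sketch of the kind of source-cluster configuration such a feature would need 
(the property names below are assumptions; verify them against the committed patch):

{code}
<!-- Sketch only: enable shipping of bulk-loaded HFiles to replication peers. -->
<property>
  <name>hbase.replication.bulkload.enabled</name>
  <value>true</value>
</property>
<!-- Sketch only: a unique id for the source cluster, used to avoid replication loops. -->
<property>
  <name>hbase.replication.cluster.id</name>
  <value>source-cluster-1</value>
</property>
{code}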



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13153) Bulk Loaded HFile Replication

2015-12-09 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048898#comment-15048898
 ] 

Ted Yu commented on HBASE-13153:


Please address checkstyle warnings.


> Bulk Loaded HFile Replication
> -
>
> Key: HBASE-13153
> URL: https://issues.apache.org/jira/browse/HBASE-13153
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: sunhaitao
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-13153-branch-1-v18.patch, HBASE-13153-v1.patch, 
> HBASE-13153-v10.patch, HBASE-13153-v11.patch, HBASE-13153-v12.patch, 
> HBASE-13153-v13.patch, HBASE-13153-v14.patch, HBASE-13153-v15.patch, 
> HBASE-13153-v16.patch, HBASE-13153-v17.patch, HBASE-13153-v18.patch, 
> HBASE-13153-v19.patch, HBASE-13153-v2.patch, HBASE-13153-v3.patch, 
> HBASE-13153-v4.patch, HBASE-13153-v5.patch, HBASE-13153-v6.patch, 
> HBASE-13153-v7.patch, HBASE-13153-v8.patch, HBASE-13153-v9.patch, 
> HBASE-13153.patch, HBase Bulk Load Replication-v1-1.pdf, HBase Bulk Load 
> Replication-v2.pdf, HBase Bulk Load Replication-v3.pdf, HBase Bulk Load 
> Replication.pdf, HDFS_HA_Solution.PNG
>
>
> Currently we plan to use the HBase replication feature to handle a disaster 
> tolerance scenario, but we hit an issue: we use bulk load very frequently, and 
> because bulk load bypasses the write path it generates no WAL entries, so the 
> data will not be replicated to the backup cluster. It's inappropriate to bulk 
> load twice, on both the active cluster and the backup cluster. So I suggest 
> modifying the bulk load feature so that bulk-loaded data reaches both the 
> active cluster and the backup cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14425) In Secure Zookeeper cluster superuser will not have sufficient permission if multiple values are configured in "hbase.superuser"

2015-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049015#comment-15049015
 ] 

Hudson commented on HBASE-14425:


SUCCESS: Integrated in HBase-0.98-matrix #271 (See 
[https://builds.apache.org/job/HBase-0.98-matrix/271/])
HBASE-14425 In Secure Zookeeper cluster superuser will not have (apurtell: rev 
33ecfc3b59f96d691186517b1ab6d8cf548360a3)
* hbase-client/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKUtil.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperWatcher.java
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestZKAndFSPermissions.java


> In Secure Zookeeper cluster superuser will not have sufficient permission if 
> multiple values are configured in "hbase.superuser"
> 
>
> Key: HBASE-14425
> URL: https://issues.apache.org/jira/browse/HBASE-14425
> Project: HBase
>  Issue Type: Bug
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14425-V2.patch, HBASE-14425-V2.patch, 
> HBASE-14425.patch
>
>
> During master intialization we are setting ACLs for the znodes.
> In ZKUtil.createACL(ZooKeeperWatcher zkw, String node, boolean 
> isSecureZooKeeper),
> {code}
>   String superUser = zkw.getConfiguration().get("hbase.superuser");
>   ArrayList<ACL> acls = new ArrayList<ACL>();
>   // add permission to hbase supper user
>   if (superUser != null) {
>     acls.add(new ACL(Perms.ALL, new Id("auth", superUser)));
>   }
> {code}
> Here we are directly setting the "hbase.superuser" value on the znode, which 
> causes an issue when multiple values are configured. In "hbase.superuser", 
> multiple superusers and supergroups can be configured, separated by commas. We 
> need to iterate over them and set an ACL for each.
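
A minimal sketch of the suggested iteration, following the snippet quoted above (illustrative 
only, not the attached patch; the skipping of group entries is an assumption):

{code}
  String superUsers = zkw.getConfiguration().get("hbase.superuser");
  ArrayList<ACL> acls = new ArrayList<ACL>();
  if (superUsers != null) {
    for (String user : superUsers.split(",")) {
      user = user.trim();
      if (user.isEmpty() || user.startsWith("@")) {
        continue; // assumption: group entries cannot be mapped to a ZooKeeper auth Id
      }
      acls.add(new ACL(Perms.ALL, new Id("auth", user)));
    }
  }
{code}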



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14915) Hanging test : org.apache.hadoop.hbase.mapreduce.TestImportExport

2015-12-09 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049021#comment-15049021
 ] 

stack commented on HBASE-14915:
---

This morning again...

kalashnikov:hbase.git.commit stack$ python ./dev-support/findHangingTests.py 
https://builds.apache.org/view/H-L/view/HBase/job/HBase-1.2/431/jdk=latest1.7,label=Hadoop/consoleText
Fetching 
https://builds.apache.org/view/H-L/view/HBase/job/HBase-1.2/431/jdk=latest1.7,label=Hadoop/consoleText
Building remotely on H4 (Mapreduce zookeeper Hadoop Pig falcon Hdfs) in 
workspace 
/home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop
Printing hanging tests
Hanging test : org.apache.hadoop.hbase.mapreduce.TestImportExport

> Hanging test : org.apache.hadoop.hbase.mapreduce.TestImportExport
> -
>
> Key: HBASE-14915
> URL: https://issues.apache.org/jira/browse/HBASE-14915
> Project: HBase
>  Issue Type: Sub-task
>  Components: hangingTests
>Reporter: stack
> Attachments: HBASE-14915-branch-1.2.patch
>
>
> This test hangs a bunch:
> Here is latest:
> https://builds.apache.org/job/HBase-1.2/418/jdk=latest1.7,label=Hadoop/consoleText



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14869) Better request latency and size histograms

2015-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049041#comment-15049041
 ] 

Hudson commented on HBASE-14869:


FAILURE: Integrated in HBase-1.0 #1121 (See 
[https://builds.apache.org/job/HBase-1.0/1121/])
HBASE-14869 Better request latency and size histograms. (Vikas (larsh: rev 
fd55483bcf8f9121228f0c7f34ec5b5f062a6723)
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsAssignmentManagerSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableTimeHistogram.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWALSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableRangeHistogram.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/balancer/MetricsBalancerSourceImpl.java
* 
hbase-hadoop-compat/src/test/java/org/apache/hadoop/hbase/test/MetricsAssertHelper.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsEditsReplaySourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/thrift/MetricsThriftServerSourceImpl.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerMetrics.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsSnapshotSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableSizeHistogram.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsMasterFilesystemSourceImpl.java
* 
hbase-hadoop2-compat/src/test/java/org/apache/hadoop/hbase/test/MetricsAssertHelperImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/DynamicMetricsRegistry.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableHistogram.java


> Better request latency and size histograms
> --
>
> Key: HBASE-14869
> URL: https://issues.apache.org/jira/browse/HBASE-14869
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Lars Hofhansl
>Assignee: Vikas Vishwakarma
> Fix For: 2.0.0, 1.3.0, 0.98.17
>
> Attachments: 14869-test-0.98.txt, 14869-v1-0.98.txt, 
> 14869-v1-2.0.txt, 14869-v2-0.98.txt, 14869-v2-2.0.txt, 14869-v3-0.98.txt, 
> 14869-v4-0.98.txt, 14869-v5-0.98.txt, 14869-v6-0.98.txt, AppendSizeTime.png, 
> Get.png
>
>
> I just discussed this with a colleague.
> The get, put, etc, histograms that each region server keeps are somewhat 
> useless (depending on what you want to achieve of course), as they are 
> aggregated and calculated by each region server.
> It would be better to record the number of requests in certain latency 
> bands in addition to what we do now.
> For example, the number of gets that took 0-5ms, 6-10ms, 10-20ms, 20-50ms, 
> 50-100ms, 100-1000ms, > 1000ms, etc. (just as an example, should be 
> configurable).
> That way we can do further calculations after the fact, and answer questions 
> like: How often did we miss our SLA? Percentage of requests that missed an 
> SLA, etc.
> Comments?
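
A minimal sketch of the banding idea (illustrative only, not the attached patch):

{code}
import java.util.concurrent.atomic.AtomicLongArray;

// Count requests per configurable latency band; the last slot counts everything
// above the highest bound (e.g. > 1000ms).
public class LatencyBands {
  private final long[] boundsMs;
  private final AtomicLongArray counts;

  public LatencyBands(long... boundsMs) {      // e.g. new LatencyBands(5, 10, 20, 50, 100, 1000)
    this.boundsMs = boundsMs;
    this.counts = new AtomicLongArray(boundsMs.length + 1);
  }

  public void record(long latencyMs) {
    for (int i = 0; i < boundsMs.length; i++) {
      if (latencyMs <= boundsMs[i]) {
        counts.incrementAndGet(i);
        return;
      }
    }
    counts.incrementAndGet(boundsMs.length);   // overflow band
  }

  public long countForBand(int band) {
    return counts.get(band);
  }
}
{code}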



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14004) [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin

2015-12-09 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049048#comment-15049048
 ] 

Phil Yang commented on HBASE-14004:
---

The reason I asked this question is that if we don't have to guarantee the WAL 
is replayed in order and only once for each log, it may be easier to resolve 
HBASE-14949, which is being fixed by Heng.

> [Replication] Inconsistency between Memstore and WAL may result in data in 
> remote cluster that is not in the origin
> ---
>
> Key: HBASE-14004
> URL: https://issues.apache.org/jira/browse/HBASE-14004
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: He Liangliang
>Priority: Critical
>  Labels: replication, wal
>
> Looks like the current write path can cause an inconsistency between the 
> memstore/hfile and the WAL, which causes the slave cluster to have more data 
> than the master cluster.
> The simplified write path looks like:
> 1. insert record into Memstore
> 2. write record to WAL
> 3. sync WAL
> 4. rollback Memstore if 3 fails
> It's possible that the HDFS sync RPC call fails, but the data has already 
> (perhaps partially) been transported to the DataNodes, where it finally gets 
> persisted. As a result, the handler will roll back the Memstore and the later 
> flushed HFile will also skip this record.
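
A sketch of the race in code form (stand-in interfaces, not actual HBase code):

{code}
import java.io.IOException;

// Stand-ins for the real components, just to show the ordering problem.
interface Memstore { void insert(byte[] record); void rollback(byte[] record); }
interface Wal { void append(byte[] record) throws IOException; void sync() throws IOException; }

class WritePathSketch {
  private final Memstore memstore;
  private final Wal wal;

  WritePathSketch(Memstore memstore, Wal wal) { this.memstore = memstore; this.wal = wal; }

  void write(byte[] record) throws IOException {
    memstore.insert(record);   // 1. insert into the Memstore
    wal.append(record);        // 2. write the record to the WAL
    try {
      wal.sync();              // 3. sync the WAL
    } catch (IOException e) {
      // 4. rollback if the sync "fails". The sync RPC can fail even though the bytes
      // reached the DataNodes and eventually persist, so the peer may end up with an
      // edit the origin has rolled back.
      memstore.rollback(record);
      throw e;
    }
  }
}
{code}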



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14477) Compaction improvements: Date tiered compaction policy

2015-12-09 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049082#comment-15049082
 ] 

Anoop Sam John commented on HBASE-14477:


We wanted to work on a project after this policy is in; that is why I pinged :-) 
I can also give you a helping hand if you want.


> Compaction improvements: Date tiered compaction policy
> --
>
> Key: HBASE-14477
> URL: https://issues.apache.org/jira/browse/HBASE-14477
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
>
> For immutable and mostly immutable data the current SizeTiered-based 
> compaction policy is not efficient. 
> # There is no need to compact all files into one, because, data is (mostly) 
> immutable and we do not need to collect garbage. (performance reason will be 
> discussed later)
> # Size-tiered compaction is not suitable for applications where most recent 
> data is most important and prevents efficient caching of this data. 
> The idea  is pretty similar to DateTieredCompaction in Cassandra:
> http://www.datastax.com/dev/blog/datetieredcompactionstrategy
> http://www.datastax.com/dev/blog/dtcs-notes-from-the-field
> From Cassandra own blog:
> {quote}
> Since DTCS can be used with any table, it is important to know when it is a 
> good idea, and when it is not. I’ll try to explain the spectrum and 
> trade-offs here:
> 1. Perfect Fit: Time Series Fact Data, Deletes by Default TTL: When you 
> ingest fact data that is ordered in time, with no deletes or overwrites. This 
> is the standard “time series” use case.
> 2. OK Fit: Time-Ordered, with limited updates across whole data set, or only 
> updates to recent data: When you ingest data that is (mostly) ordered in 
> time, but revise or delete a very small proportion of the overall data across 
> the whole timeline.
> 3. Not a Good Fit: many partial row updates or deletions over time: When you 
> need to partially revise or delete fields for rows that you read together. 
> Also, when you revise or delete rows within clustered reads.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13153) Bulk Loaded HFile Replication

2015-12-09 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-13153:
--
Attachment: (was: HBASE-13153-branch-1-v18.patch)

> Bulk Loaded HFile Replication
> -
>
> Key: HBASE-13153
> URL: https://issues.apache.org/jira/browse/HBASE-13153
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: sunhaitao
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-13153-branch-1-v20.patch, HBASE-13153-v1.patch, 
> HBASE-13153-v10.patch, HBASE-13153-v11.patch, HBASE-13153-v12.patch, 
> HBASE-13153-v13.patch, HBASE-13153-v14.patch, HBASE-13153-v15.patch, 
> HBASE-13153-v16.patch, HBASE-13153-v17.patch, HBASE-13153-v18.patch, 
> HBASE-13153-v19.patch, HBASE-13153-v2.patch, HBASE-13153-v20.patch, 
> HBASE-13153-v3.patch, HBASE-13153-v4.patch, HBASE-13153-v5.patch, 
> HBASE-13153-v6.patch, HBASE-13153-v7.patch, HBASE-13153-v8.patch, 
> HBASE-13153-v9.patch, HBASE-13153.patch, HBase Bulk Load 
> Replication-v1-1.pdf, HBase Bulk Load Replication-v2.pdf, HBase Bulk Load 
> Replication-v3.pdf, HBase Bulk Load Replication.pdf, HDFS_HA_Solution.PNG
>
>
> Currently we plan to use the HBase replication feature to handle a disaster 
> tolerance scenario, but we hit an issue: we use bulk load very frequently, and 
> because bulk load bypasses the write path it generates no WAL entries, so the 
> data will not be replicated to the backup cluster. It's inappropriate to bulk 
> load twice, on both the active cluster and the backup cluster. So I suggest 
> modifying the bulk load feature so that bulk-loaded data reaches both the 
> active cluster and the backup cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13153) Bulk Loaded HFile Replication

2015-12-09 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-13153:
--
Attachment: HBASE-13153-branch-1-v20.patch

Patch for branch-1

> Bulk Loaded HFile Replication
> -
>
> Key: HBASE-13153
> URL: https://issues.apache.org/jira/browse/HBASE-13153
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: sunhaitao
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-13153-branch-1-v20.patch, HBASE-13153-v1.patch, 
> HBASE-13153-v10.patch, HBASE-13153-v11.patch, HBASE-13153-v12.patch, 
> HBASE-13153-v13.patch, HBASE-13153-v14.patch, HBASE-13153-v15.patch, 
> HBASE-13153-v16.patch, HBASE-13153-v17.patch, HBASE-13153-v18.patch, 
> HBASE-13153-v19.patch, HBASE-13153-v2.patch, HBASE-13153-v20.patch, 
> HBASE-13153-v3.patch, HBASE-13153-v4.patch, HBASE-13153-v5.patch, 
> HBASE-13153-v6.patch, HBASE-13153-v7.patch, HBASE-13153-v8.patch, 
> HBASE-13153-v9.patch, HBASE-13153.patch, HBase Bulk Load 
> Replication-v1-1.pdf, HBase Bulk Load Replication-v2.pdf, HBase Bulk Load 
> Replication-v3.pdf, HBase Bulk Load Replication.pdf, HDFS_HA_Solution.PNG
>
>
> Currently we plan to use the HBase replication feature to handle a disaster 
> tolerance scenario, but we hit an issue: we use bulk load very frequently, and 
> because bulk load bypasses the write path it generates no WAL entries, so the 
> data will not be replicated to the backup cluster. It's inappropriate to bulk 
> load twice, on both the active cluster and the backup cluster. So I suggest 
> modifying the bulk load feature so that bulk-loaded data reaches both the 
> active cluster and the backup cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14937) Make rpc call timeout for replication adaptive

2015-12-09 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048997#comment-15048997
 ] 

Ashish Singhi commented on HBASE-14937:
---

To solve this problem the client can simply increase the timeout value of 
{{hbase.rpc.timeout}} as needed (by default it is 1 minute), but that applies 
to all RPC requests. Rather than doing this, we can make it adaptive: add 
another configuration, {{hbase.replication.rpc.timeout}}, defaulting to 
{{hbase.rpc.timeout}}, and set it as the call timeout on the replication RPC. 
On every {{CallTimeOutException}} we increase this value by some multiplier, 
up to a configurable number of times, and use the increased timeout for the 
next retry of the replication request.
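
A rough sketch of that escalation (hypothetical helper, names assumed):

{code}
// Hypothetical sketch of the proposed escalation, not an actual patch.
public class AdaptiveReplicationTimeout {
  private final long baseTimeoutMs;   // would default to hbase.rpc.timeout
  private final int maxEscalations;   // configurable number of escalations
  private final double multiplier;

  public AdaptiveReplicationTimeout(long baseTimeoutMs, int maxEscalations, double multiplier) {
    this.baseTimeoutMs = baseTimeoutMs;
    this.maxEscalations = maxEscalations;
    this.multiplier = multiplier;
  }

  /** Call timeout to use for the given retry attempt (0 = first attempt). */
  public long timeoutForAttempt(int attempt) {
    int n = Math.min(attempt, maxEscalations);
    return (long) (baseTimeoutMs * Math.pow(multiplier, n));
  }
}
{code}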

Any other thoughts ?

> Make rpc call timeout for replication adaptive
> --
>
> Key: HBASE-14937
> URL: https://issues.apache.org/jira/browse/HBASE-14937
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>  Labels: replication
>
> When peer cluster replication is disabled and lot of writes are happening in 
> active cluster and later on peer cluster replication is enabled then there 
> are chances that replication requests to peer cluster may time out.
> This is possible after HBASE-13153 and it can also happen with many and many 
> WAL data replication still pending to replicate.
> Approach to this problem will be discussed in the comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14915) Hanging test : org.apache.hadoop.hbase.mapreduce.TestImportExport

2015-12-09 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049012#comment-15049012
 ] 

stack commented on HBASE-14915:
---

Yes, I need to look more, but I think we are stuck... unable to flush. Let me 
look... (Not sure how it is going zombie, though...)

> Hanging test : org.apache.hadoop.hbase.mapreduce.TestImportExport
> -
>
> Key: HBASE-14915
> URL: https://issues.apache.org/jira/browse/HBASE-14915
> Project: HBase
>  Issue Type: Sub-task
>  Components: hangingTests
>Reporter: stack
> Attachments: HBASE-14915-branch-1.2.patch
>
>
> This test hangs a bunch:
> Here is latest:
> https://builds.apache.org/job/HBase-1.2/418/jdk=latest1.7,label=Hadoop/consoleText



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14869) Better request latency and size histograms

2015-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049016#comment-15049016
 ] 

Hudson commented on HBASE-14869:


SUCCESS: Integrated in HBase-0.98-matrix #271 (See 
[https://builds.apache.org/job/HBase-0.98-matrix/271/])
HBASE-14869 Better request latency and size histograms. (Vikas (larsh: rev 
69b0c5477c32e60afc82542e7d225fbc081e4942)
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/DynamicMetricsRegistry.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableHistogram.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsEditsReplaySourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableSizeHistogram.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsAssignmentManagerSourceImpl.java
* 
hbase-hadoop2-compat/src/test/java/org/apache/hadoop/hbase/test/MetricsAssertHelperImpl.java
* 
hbase-hadoop1-compat/src/main/java/org/apache/hadoop/metrics2/lib/MetricMutableSizeHistogram.java
* 
hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWALSourceImpl.java
* 
hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsSnapshotSourceImpl.java
* 
hbase-hadoop-compat/src/test/java/org/apache/hadoop/hbase/test/MetricsAssertHelper.java
* 
hbase-hadoop1-compat/src/main/java/org/apache/hadoop/metrics2/lib/MetricMutableHistogram.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSourceImpl.java
* 
hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java
* 
hbase-hadoop1-compat/src/main/java/org/apache/hadoop/metrics2/lib/DynamicMetricsRegistry.java
* 
hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionSourceImpl.java
* 
hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsSnapshotSourceImpl.java
* 
hbase-hadoop1-compat/src/main/java/org/apache/hadoop/metrics2/lib/MetricMutableTimeHistogram.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionSourceImpl.java
* 
hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsEditsReplaySourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/thrift/MetricsThriftServerSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java
* 
hbase-hadoop1-compat/src/test/java/org/apache/hadoop/hbase/test/MetricsAssertHelperImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/balancer/MetricsBalancerSourceImpl.java
* 
hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/thrift/MetricsThriftServerSourceImpl.java
* 
hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/master/balancer/MetricsBalancerSourceImpl.java
* 
hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsAssignmentManagerSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWALSourceImpl.java
* 
hbase-hadoop1-compat/src/main/java/org/apache/hadoop/metrics2/lib/MetricMutableRangeHistogram.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsMasterFilesystemSourceImpl.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerMetrics.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableTimeHistogram.java
* 
hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsMasterFilesystemSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableRangeHistogram.java


> Better request latency and size histograms
> --
>
> Key: HBASE-14869
> URL: https://issues.apache.org/jira/browse/HBASE-14869
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Lars Hofhansl
>Assignee: Vikas Vishwakarma
> Fix For: 2.0.0, 1.3.0, 0.98.17
>
> Attachments: 14869-test-0.98.txt, 14869-v1-0.98.txt, 
> 14869-v1-2.0.txt, 14869-v2-0.98.txt, 14869-v2-2.0.txt, 14869-v3-0.98.txt, 
> 14869-v4-0.98.txt, 14869-v5-0.98.txt, 14869-v6-0.98.txt, AppendSizeTime.png, 
> Get.png
>
>
> I just discussed this with a colleague.
> The get, put, etc, histograms that each region server keeps are somewhat 
> useless (depending on what you want to achieve of course), as they are 
> aggregated and calculated by each region server.
> It would be better to record the number of requests in certain latency 
> bands in addition to what we do now.
> For example, the number of gets that took 0-5ms, 6-10ms, 10-20ms, 20-50ms, 
> 

[jira] [Commented] (HBASE-13796) ZKUtil doesn't clean quorum setting properly

2015-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049017#comment-15049017
 ] 

Hudson commented on HBASE-13796:


SUCCESS: Integrated in HBase-0.98-matrix #271 (See 
[https://builds.apache.org/job/HBase-0.98-matrix/271/])
HBASE-13796 ZKUtil doesn't clean quorum setting properly (apurtell: rev 
22f537d9fa8b9b43c67630b5e048e17c1873c5c1)
* hbase-client/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKUtil.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java


> ZKUtil doesn't clean quorum setting properly
> 
>
> Key: HBASE-13796
> URL: https://issues.apache.org/jira/browse/HBASE-13796
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1, 1.1.0, 0.98.12
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 0.98.17
>
> Attachments: HBASE-13796.patch
>
>
> ZKUtil.getZooKeeperClusterKey is obviously trying to pull out the ZooKeeper 
> quorum setting from the config object and remove several special characters 
> from it. Due to a misplaced parenthesis, however, it's instead running the 
> replace operation on the config setting _name_, HConstants.ZOOKEEPER_QUORUM, 
> and not the config setting itself. 
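
In other words, a sketch of the bug reconstructed from the description (the exact character 
class is an assumption, not copied from the source):

{code}
// Buggy: the replaceAll runs on the constant name, not on the configured value.
String buggy = conf.get(HConstants.ZOOKEEPER_QUORUM.replaceAll("[\\t\\n\\x0B\\f\\r]", ""));

// Intended: read the quorum value first, then strip the special characters from it.
String fixed = conf.get(HConstants.ZOOKEEPER_QUORUM).replaceAll("[\\t\\n\\x0B\\f\\r]", "");
{code}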



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14954) IllegalArgumentException was thrown when doing online configuration change in CompactSplitThread

2015-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049069#comment-15049069
 ] 

Hudson commented on HBASE-14954:


FAILURE: Integrated in HBase-1.2 #432 (See 
[https://builds.apache.org/job/HBase-1.2/432/])
HBASE-14954 IllegalArgumentException was thrown when doing online (tedyu: rev 
f4fec4cd13043c8d50e2bd063903269382a6306f)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactSplitThread.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java


> IllegalArgumentException was thrown when doing online configuration change in 
> CompactSplitThread
> 
>
> Key: HBASE-14954
> URL: https://issues.apache.org/jira/browse/HBASE-14954
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, regionserver
>Affects Versions: 1.1.2
>Reporter: Victor Xu
>Assignee: Victor Xu
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-14954-v1.patch
>
>
> Online configuration change is a terrific feature for HBase administrators. 
> However, when we use this feature to tune the compaction thread pool size online, 
> it triggers an IllegalArgumentException. The cause is the call order of 
> setMaximumPoolSize() and setCorePoolSize() on ThreadPoolExecutor: when 
> making the pool larger, we should call setMaximumPoolSize() first; when making 
> it smaller, we need to call setCorePoolSize() first. Besides, there is also a 
> copy-paste bug in the merge and split thread pools which I will fix together.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14954) IllegalArgumentException was thrown when doing online configuration change in CompactSplitThread

2015-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049143#comment-15049143
 ] 

Hudson commented on HBASE-14954:


FAILURE: Integrated in HBase-1.3 #425 (See 
[https://builds.apache.org/job/HBase-1.3/425/])
HBASE-14954 IllegalArgumentException was thrown when doing online (tedyu: rev 
07e2496ad1735981ba910a873eeb6a50b1461f0d)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactSplitThread.java


> IllegalArgumentException was thrown when doing online configuration change in 
> CompactSplitThread
> 
>
> Key: HBASE-14954
> URL: https://issues.apache.org/jira/browse/HBASE-14954
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, regionserver
>Affects Versions: 1.1.2
>Reporter: Victor Xu
>Assignee: Victor Xu
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-14954-v1.patch
>
>
> Online configuration change is a terrific feature for HBase administrators. 
> However, when we use this feature to tune the compaction thread pool size online, 
> it triggers an IllegalArgumentException. The cause is the call order of 
> setMaximumPoolSize() and setCorePoolSize() on ThreadPoolExecutor: when 
> making the pool larger, we should call setMaximumPoolSize() first; when making 
> it smaller, we need to call setCorePoolSize() first. Besides, there is also a 
> copy-paste bug in the merge and split thread pools which I will fix together.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14951) Make hbase.regionserver.maxlogs obsolete

2015-12-09 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049185#comment-15049185
 ] 

Enis Soztutar commented on HBASE-14951:
---

+1. The checkstyle warning is this:  
{code}
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java 
LeftCurlyCheck  0   1
{code}

I can fix it in commit. [~eclark], [~saint@gmail.com] FYI. 


> Make hbase.regionserver.maxlogs obsolete
> 
>
> Key: HBASE-14951
> URL: https://issues.apache.org/jira/browse/HBASE-14951
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14951-v1.patch, HBASE-14951-v2.patch
>
>
> There was a discussion in HBASE-14388 related to the maximum number of log files. 
> The agreement was that we should calculate this number in code but still 
> honor the user's setting. 
> The maximum number of log files is now calculated as follows:
>  maxLogs = HEAP_SIZE * memstoreRatio * 2 / logRollSize
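
The same formula as a sketch in code (parameter names are assumptions):

{code}
// Enough WAL files to cover the global memstore limit twice over before forcing flushes.
// A real implementation would presumably also honor an explicit user setting.
static int calculateMaxLogs(long heapSizeBytes, float memstoreRatio, long logRollSizeBytes) {
  return (int) ((heapSizeBytes * memstoreRatio * 2) / logRollSizeBytes);
}
{code}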



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14954) IllegalArgumentException was thrown when doing online configuration change in CompactSplitThread

2015-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049187#comment-15049187
 ] 

Hudson commented on HBASE-14954:


SUCCESS: Integrated in HBase-1.3-IT #362 (See 
[https://builds.apache.org/job/HBase-1.3-IT/362/])
HBASE-14954 IllegalArgumentException was thrown when doing online (tedyu: rev 
07e2496ad1735981ba910a873eeb6a50b1461f0d)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactSplitThread.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java


> IllegalArgumentException was thrown when doing online configuration change in 
> CompactSplitThread
> 
>
> Key: HBASE-14954
> URL: https://issues.apache.org/jira/browse/HBASE-14954
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, regionserver
>Affects Versions: 1.1.2
>Reporter: Victor Xu
>Assignee: Victor Xu
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-14954-v1.patch
>
>
> Online configuration change is a terrific feature for HBase administrators. 
> However, when we use this feature to tune the compaction thread pool size online, 
> it triggers an IllegalArgumentException. The cause is the call order of 
> setMaximumPoolSize() and setCorePoolSize() on ThreadPoolExecutor: when 
> making the pool larger, we should call setMaximumPoolSize() first; when making 
> it smaller, we need to call setCorePoolSize() first. Besides, there is also a 
> copy-paste bug in the merge and split thread pools which I will fix together.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14451) Move on to htrace-4.0.1 (from htrace-3.2.0)

2015-12-09 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049340#comment-15049340
 ] 

stack commented on HBASE-14451:
---

Giving up on this for the moment. I am not sending traces, so I need to do more 
debugging. I fixed a bunch of NPEs where there was no tracer in the particular 
context, but it doesn't seem like we are generating any spans at the moment, 
post the redo to fit the htrace-4 semantics.

I was running with htrace DEBUG on and with following config:

{code}
+ <property>
+   <name>hbase.htrace.htraced.span.receiver.classes</name>
+   <value>org.apache.htrace.impl.HTracedSpanReceiver</value>
+   <description>The class name of the HTrace SpanReceivers to use inside
+   HBase. If there are no class names supplied here, tracings will not be
+   emitted.</description>
+ </property>
+ <property>
+   <name>hbase.htrace.htraced.receiver.address</name>
+   <value>localhost:9075</value>
+ </property>
+ <property>
+   <name>hbase.htraced.error.log.period.ms</name>
+   <value>1000</value>
+ </property>
+ <property>
+   <name>hbase.htrace.sampler.classes</name>
+   <value>org.apache.htrace.core.AlwaysSampler</value>
+   <description>Sampler to use when tracing. Default is
+   org.apache.htrace.core.NeverSampler. Other options are
+   org.apache.htrace.core.AlwaysSampler and
+   org.apache.htrace.core.ProbabilitySampler. See htrace-core
+   for options provided by htrace.</description>
+ </property>
{code}

Attached is latest patch.

> Move on to htrace-4.0.1 (from htrace-3.2.0)
> ---
>
> Key: HBASE-14451
> URL: https://issues.apache.org/jira/browse/HBASE-14451
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: stack
> Attachments: 14451.txt, 14451.v10.txt, 14451.v10.txt, 14451v11.patch, 
> 14451v2.txt, 14451v3.txt, 14451v4.txt, 14451v5.txt, 14451v6.txt, 14451v7.txt, 
> 14451v8.txt, 14451v9.txt, 14551v12.patch
>
>
> htrace-4.0.0 was just release with a new API. Get up on it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14959) Tracer#toString should dump configuration detail

2015-12-09 Thread stack (JIRA)
stack created HBASE-14959:
-

 Summary: Tracer#toString should dump configuration detail
 Key: HBASE-14959
 URL: https://issues.apache.org/jira/browse/HBASE-14959
 Project: HBase
  Issue Type: Bug
Reporter: stack


It should dump out configured tracers, receivers and sampler info. Currently it 
does this:

1 2015-12-09 11:59:07,901 DEBUG [main] regionserver.HRegionServer: Tracer 
created Tracer(Server/172.18.13.155)

Which is not helpful when trying to figure out what is currently loaded while 
debugging why tracing is not working.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14468) Compaction improvements: FIFO compaction policy

2015-12-09 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049310#comment-15049310
 ] 

stack commented on HBASE-14468:
---

This test hung just now in a 1.3 build:



kalashnikov:hbase.git stack$ !520
python ./dev-support/findHangingTests.py 
https://builds.apache.org/job/HBase-1.3/jdk=latest1.8,label=Hadoop/425/consoleText
Fetching 
https://builds.apache.org/job/HBase-1.3/jdk=latest1.8,label=Hadoop/425/consoleText
Building remotely on H1 (Mapreduce Hadoop Pig Hdfs) in workspace 
/home/jenkins/jenkins-slave/workspace/HBase-1.3/jdk/latest1.8/label/Hadoop
Printing hanging tests
Hanging test : 
org.apache.hadoop.hbase.regionserver.compactions.TestFIFOCompactionPolicy

It looks like it is stuck waiting on a server to show up.


https://builds.apache.org/job/HBase-1.3/jdk=latest1.8,label=Hadoop/425/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.regionserver.compactions.TestFIFOCompactionPolicy-output.txt


Please take a look see when you get a chance. Thanks.





> Compaction improvements: FIFO compaction policy
> ---
>
> Key: HBASE-14468
> URL: https://issues.apache.org/jira/browse/HBASE-14468
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14468-v1.patch, HBASE-14468-v10.patch, 
> HBASE-14468-v2.patch, HBASE-14468-v3.patch, HBASE-14468-v4.patch, 
> HBASE-14468-v5.patch, HBASE-14468-v6.patch, HBASE-14468-v7.patch, 
> HBASE-14468-v8.patch, HBASE-14468-v9.patch
>
>
> h2. FIFO Compaction
> h3. Introduction
> FIFO compaction policy selects only files which have all cells expired. The 
> column family MUST have non-default TTL. 
> Essentially, FIFO compactor does only one job: collects expired store files. 
> These are some applications which could benefit the most:
> # Use it for very high volume raw data which has a low TTL and which is the 
> source of other data (after additional processing). Example: Raw 
> time-series vs. time-based rollup aggregates and compacted time-series. We 
> collect raw time-series and store them in a CF with the FIFO compaction policy; 
> periodically we run a task which creates rollup aggregates and compacts the 
> time-series, and the original raw data can be discarded after that.
> # Use it for data which can be kept entirely in a block cache (RAM/SSD). 
> Say we have local SSD (1TB) which we can use as a block cache. No need for 
> compaction of a raw data at all.
> Because we do not do any real compaction, we do not use CPU and IO (disk and 
> network), we do not evict hot data from a block cache. The result: improved 
> throughput and latency both write and read.
> See: https://github.com/facebook/rocksdb/wiki/FIFO-compaction-style
> h3. To enable FIFO compaction policy
> For table:
> {code}
> HTableDescriptor desc = new HTableDescriptor(tableName);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code} 
> For CF:
> {code}
> HColumnDescriptor desc = new HColumnDescriptor(family);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code}
> Although region splitting is supported, for optimal performance it should be 
> disabled, either by explicitly setting DisabledRegionSplitPolicy or by 
> setting ConstantSizeRegionSplitPolicy with a very large max region size. You 
> will also have to increase the store's blocking file number, 
> *hbase.hstore.blockingStoreFiles*, to a very large value.
>  
> h3. Limitations
> Do not use FIFO compaction if :
> * Table/CF has MIN_VERSION > 0
> * Table/CF has TTL = FOREVER (HColumnDescriptor.DEFAULT_TTL)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14916) Add checkstyle_report.py to other branches

2015-12-09 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049315#comment-15049315
 ] 

Appy commented on HBASE-14916:
--

Ping.

> Add checkstyle_report.py to other branches
> --
>
> Key: HBASE-14916
> URL: https://issues.apache.org/jira/browse/HBASE-14916
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-14916-branch-1-v2.patch, 
> HBASE-14916-branch-1-v3.patch, HBASE-14916-branch-1.patch
>
>
> Given that test-patch.sh is always run from master, and that it now uses 
> checkstyle_report.py, we should pull the script back to the other branches too.
> Otherwise we see errors like: 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/jenkins.build/dev-support/test-patch.sh:
>  line 662: 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase/dev-support/checkstyle_report.py:
>  No such file or directory
> [reference|https://builds.apache.org/job/PreCommit-HBASE-Build/16734//consoleFull]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14941) locate_region shell command

2015-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049497#comment-15049497
 ] 

Hadoop QA commented on HBASE-14941:
---

{color:red}-1 overall{color}.  

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16814//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16814//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16814//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16814//console

This message is automatically generated.

> locate_region shell command
> ---
>
> Key: HBASE-14941
> URL: https://issues.apache.org/jira/browse/HBASE-14941
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Trivial
> Attachments: HBASE-14941_branch-1.patch
>
>
> Sometimes it is helpful to get the region location given a specified key, 
> without having to scan meta and look at the keys.
> So, having something like this in the shell:
> {noformat}
> hbase(main):008:0> locate_region 'testtb', 'z'
> HOST REGION   
> 
>  localhost:42006 {ENCODED => 7486fee0129f0e3a3e671fec4a4255d5, 
>   NAME => 
> 'testtb,m,1449508841130.7486fee0129f0e3a3e671fec4a4255d5.',
>   STARTKEY => 'm', ENDKEY => ''}  
> 1 row(s) in 0.0090 seconds
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14451) Move on to htrace-4.0.1 (from htrace-3.2.0)

2015-12-09 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14451:
--
Attachment: 14451v13.txt

Here is the current state of this patch.

TODO:

1. Get tracings working.
2. Review the patch to ensure no extra object creation, especially when tracing is 
OFF.
3. Work on the various trace paths through hbase to make sure they all connect 
up and tell a good story.

In another issue I will work on making htrace a trace sink again in the htrace4 
context.

> Move on to htrace-4.0.1 (from htrace-3.2.0)
> ---
>
> Key: HBASE-14451
> URL: https://issues.apache.org/jira/browse/HBASE-14451
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: stack
> Attachments: 14451.txt, 14451.v10.txt, 14451.v10.txt, 14451v11.patch, 
> 14451v13.txt, 14451v2.txt, 14451v3.txt, 14451v4.txt, 14451v5.txt, 
> 14451v6.txt, 14451v7.txt, 14451v8.txt, 14451v9.txt, 14551v12.patch
>
>
> htrace-4.0.0 was just released with a new API. Get up on it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-09 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14946:
--
Attachment: HBASE-14946-v6.patch

> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Attachments: HBASE-14946-v1.patch, HBASE-14946-v2.patch, 
> HBASE-14946-v3.patch, HBASE-14946-v5.patch, HBASE-14946-v6.patch, 
> HBASE-14946.patch
>
>
> If a user puts a list of tons of different gets into a table we will then 
> send them along in a multi. The server un-wraps each get in the multi. While 
> no single get may be over the size limit the total might be.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.
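A minimal client-side sketch of the scenario described above, using the standard 1.x client API (table name and row keys are made up): many small Gets go out as one multi batch, and although no single Get is large, the combined response can exceed the max result size.

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BigMultiGet {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("t"))) {
      List<Get> gets = new ArrayList<Get>();
      for (int i = 0; i < 100000; i++) {           // many small gets...
        gets.add(new Get(Bytes.toBytes("row-" + i)));
      }
      // ...sent as multi batches: no single Get is big, but one batch's
      // combined response can blow past the configured max result size.
      Result[] results = table.get(gets);
      System.out.println("fetched " + results.length + " rows");
    }
  }
}
{code}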



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14769) Remove unused functions and duplicate javadocs from HBaseAdmin

2015-12-09 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049316#comment-15049316
 ] 

Appy commented on HBASE-14769:
--

Ping.

> Remove unused functions and duplicate javadocs from HBaseAdmin 
> ---
>
> Key: HBASE-14769
> URL: https://issues.apache.org/jira/browse/HBASE-14769
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-14769-master-v2.patch, 
> HBASE-14769-master-v3.patch, HBASE-14769-master-v4.patch, 
> HBASE-14769-master-v5.patch, HBASE-14769-master-v6.patch, 
> HBASE-14769-master-v7.patch, HBASE-14769-master-v8.patch, 
> HBASE-14769-master-v9.patch, HBASE-14769-master.patch
>
>
> HBaseAdmin is marked private, so we are removing the functions not used 
> anywhere.
> Also, the javadocs of the overridden functions are the same as the corresponding 
> ones in Admin.java. Since javadocs are automatically inherited from the interface, 
> we can remove these hundreds of redundant lines.
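A toy illustration of the javadoc-inheritance point (hypothetical names, not the real Admin/HBaseAdmin classes):

{code}
// Toy illustration only; these are not the real Admin/HBaseAdmin classes.
interface SimpleAdmin {
  /** Deletes the named table. */
  void deleteTable(String name);
}

class SimpleAdminImpl implements SimpleAdmin {
  // No javadoc here on purpose: the javadoc tool inherits the interface's
  // comment for overriding methods, so copying it into the implementation
  // only duplicates text.
  @Override
  public void deleteTable(String name) {
    // ... implementation ...
  }
}
{code}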



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049424#comment-15049424
 ] 

Hadoop QA commented on HBASE-14946:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12776623/HBASE-14946-v6.patch
  against master branch at commit 0e147a9d6e53e71ad2e57f512b4d3e1eeeac0b78.
  ATTACHMENT ID: 12776623

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
new checkstyle errors. Check build console for list of new errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  String EXCEPTIONS_MULTI_TOO_LARGE_DESC = "A response to a mulit request 
was too large and the rest of the requests will have to be retried.";
+  public static boolean hasMinimumVersion(HBaseProtos.VersionInfo versionInfo, 
int major, int minor) {

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16813//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16813//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16813//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16813//console

This message is automatically generated.

> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Attachments: HBASE-14946-v1.patch, HBASE-14946-v2.patch, 
> HBASE-14946-v3.patch, HBASE-14946-v5.patch, HBASE-14946-v6.patch, 
> HBASE-14946-v7.patch, HBASE-14946.patch
>
>
> If a user puts a list of tons of different gets into a table we will then 
> send them along in a multi. The server un-wraps each get in the multi. While 
> no single get may be over the size limit the total might be.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14724) Per column family numops metrics

2015-12-09 Thread Ashu Pachauri (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049458#comment-15049458
 ] 

Ashu Pachauri commented on HBASE-14724:
---

Any thoughts, guys? Here is the link to the review board:
https://reviews.facebook.net/D51777

> Per column family numops metrics
> 
>
> Key: HBASE-14724
> URL: https://issues.apache.org/jira/browse/HBASE-14724
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Attachments: HBASE-14724-1.patch, HBASE-14724.patch
>
>
> It will be nice to have per CF regionserver metrics for number of operations 
> i.e. per CF get, mutate, delete and scan numops.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14795) Enhance the spark-hbase scan operations

2015-12-09 Thread Zhan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhan Zhang updated HBASE-14795:
---
Attachment: HBASE-14795-3.patch

> Enhance the spark-hbase scan operations
> ---
>
> Key: HBASE-14795
> URL: https://issues.apache.org/jira/browse/HBASE-14795
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Malaska
>Assignee: Zhan Zhang
>Priority: Minor
> Attachments: 
> 0001-HBASE-14795-Enhance-the-spark-hbase-scan-operations.patch, 
> HBASE-14795-1.patch, HBASE-14795-2.patch, HBASE-14795-3.patch
>
>
> This is a sub-jira of HBASE-14789.  This jira is to focus on the replacement 
> of TableInputFormat for a more custom scan implementation that will make the 
> following use case more effective.
> Use case:
> In the case where you have multiple scan ranges on a single table within a single 
> query, TableInputFormat will scan the outer range of the scan start and end keys, 
> where this implementation can be more pointed.
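A toy sketch of that difference, using only the plain client Scan API (row keys are made up; this is not the connector code): a query that wants two disjoint ranges versus the single outer range a TableInputFormat-style setup effectively covers.

{code}
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanRangesSketch {
  public static void main(String[] args) {
    // One query wants rows in [a, c) and [x, z) from the same table.
    // A single TableInputFormat-style setup effectively covers the outer span
    // from the smallest start key to the largest stop key:
    Scan outer = new Scan(Bytes.toBytes("a"), Bytes.toBytes("z"));
    // The enhancement is to drive pointed scans instead, skipping the
    // untouched middle of the keyspace:
    List<Scan> pointed = Arrays.asList(
        new Scan(Bytes.toBytes("a"), Bytes.toBytes("c")),
        new Scan(Bytes.toBytes("x"), Bytes.toBytes("z")));
    System.out.println(outer + " vs " + pointed);
  }
}
{code}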



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-09 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14946:
--
Attachment: HBASE-14946-v8.patch

Checkstyle, you're on my last nerve.

> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Attachments: HBASE-14946-v1.patch, HBASE-14946-v2.patch, 
> HBASE-14946-v3.patch, HBASE-14946-v5.patch, HBASE-14946-v6.patch, 
> HBASE-14946-v7.patch, HBASE-14946-v8.patch, HBASE-14946.patch
>
>
> If a user puts a list of tons of different gets into a table we will then 
> send them along in a multi. The server un-wraps each get in the multi. While 
> no single get may be over the size limit the total might be.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-09 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14946:
--
Attachment: HBASE-14946-v7.patch

https://reviews.facebook.net/D51771

> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Attachments: HBASE-14946-v1.patch, HBASE-14946-v2.patch, 
> HBASE-14946-v3.patch, HBASE-14946-v5.patch, HBASE-14946-v6.patch, 
> HBASE-14946-v7.patch, HBASE-14946.patch
>
>
> If a user puts a list of tons of different gets into a table we will then 
> send them along in a multi. The server un-wraps each get in the multi. While 
> no single get may be over the size limit the total might be.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14451) Move on to htrace-4.0.1 (from htrace-3.2.0)

2015-12-09 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049355#comment-15049355
 ] 

stack commented on HBASE-14451:
---

Sean is adding htrace to YCSB, so we need to make that hookup work all the way 
through.

> Move on to htrace-4.0.1 (from htrace-3.2.0)
> ---
>
> Key: HBASE-14451
> URL: https://issues.apache.org/jira/browse/HBASE-14451
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: stack
> Attachments: 14451.txt, 14451.v10.txt, 14451.v10.txt, 14451v11.patch, 
> 14451v13.txt, 14451v2.txt, 14451v3.txt, 14451v4.txt, 14451v5.txt, 
> 14451v6.txt, 14451v7.txt, 14451v8.txt, 14451v9.txt, 14551v12.patch
>
>
> htrace-4.0.0 was just released with a new API. Get up on it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14451) Move on to htrace-4.0.1 (from htrace-3.2.0)

2015-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049378#comment-15049378
 ] 

Hadoop QA commented on HBASE-14451:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12776626/14451v13.txt
  against master branch at commit 0e147a9d6e53e71ad2e57f512b4d3e1eeeac0b78.
  ATTACHMENT ID: 12776626

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 50 new 
or modified tests.

{color:red}-1 javac{color}.  The patch appears to cause mvn compile goal to 
fail with Hadoop version 2.4.0.

Compilation errors resume:
[ERROR] COMPILATION ERROR : 
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/RingBufferTruck.java:[22,25]
 cannot find symbol
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on 
project hbase-server: Compilation failure
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/RingBufferTruck.java:[22,25]
 cannot find symbol
[ERROR] symbol:   class Span
[ERROR] location: package org.apache.htrace
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hbase-server


Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16816//console

This message is automatically generated.

> Move on to htrace-4.0.1 (from htrace-3.2.0)
> ---
>
> Key: HBASE-14451
> URL: https://issues.apache.org/jira/browse/HBASE-14451
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: stack
> Attachments: 14451.txt, 14451.v10.txt, 14451.v10.txt, 14451v11.patch, 
> 14451v13.txt, 14451v2.txt, 14451v3.txt, 14451v4.txt, 14451v5.txt, 
> 14451v6.txt, 14451v7.txt, 14451v8.txt, 14451v9.txt, 14551v12.patch
>
>
> htrace-4.0.0 was just released with a new API. Get up on it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13153) Bulk Loaded HFile Replication

2015-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049413#comment-15049413
 ] 

Hadoop QA commented on HBASE-13153:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12776600/HBASE-13153-v20.patch
  against master branch at commit 0e147a9d6e53e71ad2e57f512b4d3e1eeeac0b78.
  ATTACHMENT ID: 12776600

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 42 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16811//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16811//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16811//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16811//console

This message is automatically generated.

> Bulk Loaded HFile Replication
> -
>
> Key: HBASE-13153
> URL: https://issues.apache.org/jira/browse/HBASE-13153
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: sunhaitao
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-13153-branch-1-v20.patch, HBASE-13153-v1.patch, 
> HBASE-13153-v10.patch, HBASE-13153-v11.patch, HBASE-13153-v12.patch, 
> HBASE-13153-v13.patch, HBASE-13153-v14.patch, HBASE-13153-v15.patch, 
> HBASE-13153-v16.patch, HBASE-13153-v17.patch, HBASE-13153-v18.patch, 
> HBASE-13153-v19.patch, HBASE-13153-v2.patch, HBASE-13153-v20.patch, 
> HBASE-13153-v3.patch, HBASE-13153-v4.patch, HBASE-13153-v5.patch, 
> HBASE-13153-v6.patch, HBASE-13153-v7.patch, HBASE-13153-v8.patch, 
> HBASE-13153-v9.patch, HBASE-13153.patch, HBase Bulk Load 
> Replication-v1-1.pdf, HBase Bulk Load Replication-v2.pdf, HBase Bulk Load 
> Replication-v3.pdf, HBase Bulk Load Replication.pdf, HDFS_HA_Solution.PNG
>
>
> Currently we plan to use the HBase Replication feature to deal with a disaster 
> tolerance scenario. But we encounter an issue: we use bulkload very 
> frequently, and because bulkload bypasses the write path it does not generate WAL 
> entries, so the data will not be replicated to the backup cluster. It's 
> inappropriate to bulkload twice, on both the active cluster and the backup cluster. 
> So I advise making some modifications to the bulkload feature to enable bulkload 
> to both the active cluster and the backup cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13153) Bulk Loaded HFile Replication

2015-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049414#comment-15049414
 ] 

Hadoop QA commented on HBASE-13153:
---

{color:green}+1 overall{color}.  

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16812//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16812//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16812//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16812//console

This message is automatically generated.

> Bulk Loaded HFile Replication
> -
>
> Key: HBASE-13153
> URL: https://issues.apache.org/jira/browse/HBASE-13153
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: sunhaitao
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-13153-branch-1-v20.patch, HBASE-13153-v1.patch, 
> HBASE-13153-v10.patch, HBASE-13153-v11.patch, HBASE-13153-v12.patch, 
> HBASE-13153-v13.patch, HBASE-13153-v14.patch, HBASE-13153-v15.patch, 
> HBASE-13153-v16.patch, HBASE-13153-v17.patch, HBASE-13153-v18.patch, 
> HBASE-13153-v19.patch, HBASE-13153-v2.patch, HBASE-13153-v20.patch, 
> HBASE-13153-v3.patch, HBASE-13153-v4.patch, HBASE-13153-v5.patch, 
> HBASE-13153-v6.patch, HBASE-13153-v7.patch, HBASE-13153-v8.patch, 
> HBASE-13153-v9.patch, HBASE-13153.patch, HBase Bulk Load 
> Replication-v1-1.pdf, HBase Bulk Load Replication-v2.pdf, HBase Bulk Load 
> Replication-v3.pdf, HBase Bulk Load Replication.pdf, HDFS_HA_Solution.PNG
>
>
> Currently we plan to use the HBase Replication feature to deal with a disaster 
> tolerance scenario. But we encounter an issue: we use bulkload very 
> frequently, and because bulkload bypasses the write path it does not generate WAL 
> entries, so the data will not be replicated to the backup cluster. It's 
> inappropriate to bulkload twice, on both the active cluster and the backup cluster. 
> So I advise making some modifications to the bulkload feature to enable bulkload 
> to both the active cluster and the backup cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14960) Fallback to using default RPCControllerFactory if class cannot be loaded

2015-12-09 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-14960:
--
Attachment: hbase-14960_v1.patch

v1 patch. Catch class-not-found and fall back. 

> Fallback to using default RPCControllerFactory if class cannot be loaded
> 
>
> Key: HBASE-14960
> URL: https://issues.apache.org/jira/browse/HBASE-14960
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: hbase-14960_v1.patch
>
>
> In Phoenix + HBase clusters, the hbase-site.xml configuration will point to a 
> custom rpc controller factory which is a Phoenix-specific one to configure 
> the priorities for index and system catalog table. 
> However, sometimes these Phoenix-enabled clusters are used from pure-HBase 
> client applications resulting in ClassNotFoundExceptions in application code 
> or MapReduce jobs. Since hbase configuration is shared between 
> Phoenix-clients and HBase clients, having different configurations at the 
> client side is hard. 
> We can instead try to load up the RPCControllerFactory from conf, and if not 
> found, fallback to the default one (in case this is a pure HBase client). In 
> case Phoenix is already in the classpath, it will work as usual. 
> This does not affect the rpc scheduler factory since it is only used at the 
> server side. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14960) Fallback to using default RPCControllerFactory if class cannot be loaded

2015-12-09 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049606#comment-15049606
 ] 

Devaraj Das commented on HBASE-14960:
-

Looks fine to me.

> Fallback to using default RPCControllerFactory if class cannot be loaded
> 
>
> Key: HBASE-14960
> URL: https://issues.apache.org/jira/browse/HBASE-14960
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: hbase-14960_v1.patch
>
>
> In Phoenix + HBase clusters, the hbase-site.xml configuration will point to a 
> custom rpc controller factory which is a Phoenix-specific one to configure 
> the priorities for index and system catalog table. 
> However, sometimes these Phoenix-enabled clusters are used from pure-HBase 
> client applications resulting in ClassNotFoundExceptions in application code 
> or MapReduce jobs. Since hbase configuration is shared between 
> Phoenix-clients and HBase clients, having different configurations at the 
> client side is hard. 
> We can instead try to load up the RPCControllerFactory from conf, and if not 
> found, fallback to the default one (in case this is a pure HBase client). In 
> case Phoenix is already in the classpath, it will work as usual. 
> This does not affect the rpc scheduler factory since it is only used at the 
> server side. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049744#comment-15049744
 ] 

Hadoop QA commented on HBASE-14946:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12776648/HBASE-14946-v8.patch
  against master branch at commit 0e147a9d6e53e71ad2e57f512b4d3e1eeeac0b78.
  ATTACHMENT ID: 12776648

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
new checkstyle errors. Check build console for list of new errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16818//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16818//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16818//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16818//console

This message is automatically generated.

> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Attachments: HBASE-14946-v1.patch, HBASE-14946-v2.patch, 
> HBASE-14946-v3.patch, HBASE-14946-v5.patch, HBASE-14946-v6.patch, 
> HBASE-14946-v7.patch, HBASE-14946-v8.patch, HBASE-14946-v9.patch, 
> HBASE-14946.patch
>
>
> If a user puts a list of tons of different gets into a table we will then 
> send them along in a multi. The server un-wraps each get in the multi. While 
> no single get may be over the size limit the total might be.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14954) IllegalArgumentException was thrown when doing online configuration change in CompactSplitThread

2015-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049784#comment-15049784
 ] 

Hudson commented on HBASE-14954:


FAILURE: Integrated in HBase-1.1-JDK8 #1705 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1705/])
HBASE-14954 IllegalArgumentException was thrown when doing online (tedyu: rev 
3f779e4d633c24f1a32bd4ee1754a84198855376)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactSplitThread.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java


> IllegalArgumentException was thrown when doing online configuration change in 
> CompactSplitThread
> 
>
> Key: HBASE-14954
> URL: https://issues.apache.org/jira/browse/HBASE-14954
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, regionserver
>Affects Versions: 1.1.2
>Reporter: Victor Xu
>Assignee: Victor Xu
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-14954-v1.patch
>
>
> Online configuration change is a terrific feature for HBase administrators. 
> However, when we use this feature to tune the compaction thread pool size online, 
> it triggers an IllegalArgumentException. The cause is the order of the 
> setMaximumPoolSize() and setCorePoolSize() calls on ThreadPoolExecutor: when 
> making the pools bigger, we should call setMaximumPoolSize() first; when making 
> them smaller, we need to call setCorePoolSize() first. Besides, there is also a 
> copy-paste bug in the merge and split thread pools which I will fix together.
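A minimal sketch of the resize ordering described above (a standalone helper, not the actual CompactSplitThread code):

{code}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolResizeSketch {
  // Growing: raise the maximum first. Shrinking: lower the core size first.
  // The wrong order can make ThreadPoolExecutor throw IllegalArgumentException
  // because the core size would exceed the maximum size.
  static void resize(ThreadPoolExecutor pool, int newSize) {
    if (newSize > pool.getMaximumPoolSize()) {
      pool.setMaximumPoolSize(newSize);
      pool.setCorePoolSize(newSize);
    } else {
      pool.setCorePoolSize(newSize);
      pool.setMaximumPoolSize(newSize);
    }
  }

  public static void main(String[] args) {
    ThreadPoolExecutor pool = new ThreadPoolExecutor(4, 4, 60, TimeUnit.SECONDS,
        new LinkedBlockingQueue<Runnable>());
    resize(pool, 8);   // grow
    resize(pool, 2);   // shrink
    pool.shutdown();
  }
}
{code}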



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14960) Fallback to using default RPCControllerFactory if class cannot be loaded

2015-12-09 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-14960:
-

 Summary: Fallback to using default RPCControllerFactory if class 
cannot be loaded
 Key: HBASE-14960
 URL: https://issues.apache.org/jira/browse/HBASE-14960
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0, 1.2.0, 1.3.0


In Phoenix + HBase clusters, the hbase-site.xml configuration will point to a 
custom rpc controller factory which is a Phoenix-specific one to configure the 
priorities for index and system catalog table. 

However, sometimes these Phoenix-enabled clusters are used from pure-HBase 
client applications resulting in ClassNotFoundExceptions in application code or 
MapReduce jobs. Since hbase configuration is shared between Phoenix-clients and 
HBase clients, having different configurations at the client side is hard. 

We can instead try to load up the RPCControllerFactory from conf, and if not 
found, fallback to the default one (in case this is a pure HBase client). In 
case Phoenix is already in the classpath, it will work as usual. 

This does not affect the rpc scheduler factory since it is only used at the 
server side. 
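A minimal sketch of the fallback idea (a hypothetical helper, not the attached patch; the config key and constructor are assumed from the client API):

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ipc.RpcControllerFactory;

public class ControllerFactoryFallback {
  private static final Log LOG = LogFactory.getLog(ControllerFactoryFallback.class);

  // Load the configured factory; if the class (e.g. a Phoenix one) is not on
  // the classpath, log a warning and fall back to the stock factory instead of
  // failing the pure-HBase client.
  static RpcControllerFactory instantiate(Configuration conf) {
    String clazz = conf.get("hbase.rpc.controllerfactory.class",
        RpcControllerFactory.class.getName());
    try {
      return (RpcControllerFactory) Class.forName(clazz)
          .getConstructor(Configuration.class).newInstance(conf);
    } catch (ReflectiveOperationException | NoClassDefFoundError e) {
      LOG.warn("Cannot load configured " + clazz + "; falling back to default", e);
      return new RpcControllerFactory(conf);
    }
  }
}
{code}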



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14851) Add test showing how to use TTL from thrift

2015-12-09 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049650#comment-15049650
 ] 

Elliott Clark commented on HBASE-14851:
---

bq. The below is a bit obnoxious (says 30 in comment – fix on commit)
ttlTimeMs is 2 seconds. So 2 * 15 == 30 seconds

> Add test showing how to use TTL from thrift
> ---
>
> Key: HBASE-14851
> URL: https://issues.apache.org/jira/browse/HBASE-14851
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0
>
> Attachments: HBASE-14851-v1.patch, HBASE-14851.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14866) VerifyReplication should use peer configuration in peer connection

2015-12-09 Thread Gary Helmling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling updated HBASE-14866:
--
Attachment: hbase-14866-branch-1-v1.patch

Attaching the patch applied to branch-1 and branch-1.2.

> VerifyReplication should use peer configuration in peer connection
> --
>
> Key: HBASE-14866
> URL: https://issues.apache.org/jira/browse/HBASE-14866
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14866.patch, HBASE-14866_v1.patch, 
> hbase-14866-branch-1-v1.patch, hbase-14866-v4.patch, hbase-14866-v5.patch, 
> hbase-14866-v6.patch, hbase-14866_v2.patch, hbase-14866_v3.patch
>
>
> VerifyReplication uses the replication peer's configuration to construct the 
> ZooKeeper quorum address for the peer connection.  However, other 
> configuration properties in the peer's configuration are dropped.  It should 
> merge all configuration properties from the {{ReplicationPeerConfig}} when 
> creating the peer connection and obtaining credentials for the peer cluster.
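A minimal sketch of the merge described above (a hypothetical helper, not the attached patch): copy the local configuration and overlay every property the ReplicationPeerConfig carries, rather than taking only the ZooKeeper quorum.

{code}
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

public class PeerConfSketch {
  // Copy the local configuration, then overlay every property carried by the
  // ReplicationPeerConfig, instead of taking only the peer's ZK quorum.
  static Configuration mergePeerConf(Configuration localConf, ReplicationPeerConfig peer) {
    Configuration peerConf = new Configuration(localConf);
    for (Map.Entry<String, String> e : peer.getConfiguration().entrySet()) {
      peerConf.set(e.getKey(), e.getValue());
    }
    return peerConf;
  }
}
{code}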



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14866) VerifyReplication should use peer configuration in peer connection

2015-12-09 Thread Gary Helmling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling updated HBASE-14866:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to master, branch-1, and branch-1.2.

> VerifyReplication should use peer configuration in peer connection
> --
>
> Key: HBASE-14866
> URL: https://issues.apache.org/jira/browse/HBASE-14866
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14866.patch, HBASE-14866_v1.patch, 
> hbase-14866-branch-1-v1.patch, hbase-14866-v4.patch, hbase-14866-v5.patch, 
> hbase-14866-v6.patch, hbase-14866_v2.patch, hbase-14866_v3.patch
>
>
> VerifyReplication uses the replication peer's configuration to construct the 
> ZooKeeper quorum address for the peer connection.  However, other 
> configuration properties in the peer's configuration are dropped.  It should 
> merge all configuration properties from the {{ReplicationPeerConfig}} when 
> creating the peer connection and obtaining credentials for the peer cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-10390) expose checkAndPut/Delete custom comparators thru HTable

2015-12-09 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-10390:
--
Attachment: HBASE-10390-v1.patch

Patch v1.

> expose checkAndPut/Delete custom comparators thru HTable
> 
>
> Key: HBASE-10390
> URL: https://issues.apache.org/jira/browse/HBASE-10390
> Project: HBase
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Vladimir Rodionov
> Attachments: HBASE-10390-v1.patch
>
>
> checkAndPut/Delete appear to support custom comparators. However, thru 
> HTable, there's no way to pass one, it always creates BinaryComparator from 
> value. It would be good to expose the custom ones in the API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-10390) expose checkAndPut/Delete custom comparators thru HTable

2015-12-09 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-10390:
--
Status: Patch Available  (was: Open)

> expose checkAndPut/Delete custom comparators thru HTable
> 
>
> Key: HBASE-10390
> URL: https://issues.apache.org/jira/browse/HBASE-10390
> Project: HBase
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Vladimir Rodionov
> Attachments: HBASE-10390-v1.patch
>
>
> checkAndPut/Delete appear to support custom comparators. However, thru 
> HTable, there's no way to pass one, it always creates BinaryComparator from 
> value. It would be good to expose the custom ones in the API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14030) HBase Backup/Restore Phase 1

2015-12-09 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14030:
--
Attachment: (was: HBASE-14030-v20.patch)

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v1.patch, 
> HBASE-14030-v10.patch, HBASE-14030-v11.patch, HBASE-14030-v12.patch, 
> HBASE-14030-v13.patch, HBASE-14030-v14.patch, HBASE-14030-v15.patch, 
> HBASE-14030-v17.patch, HBASE-14030-v18.patch, HBASE-14030-v2.patch, 
> HBASE-14030-v20.patch, HBASE-14030-v3.patch, HBASE-14030-v4.patch, 
> HBASE-14030-v5.patch, HBASE-14030-v6.patch, HBASE-14030-v7.patch, 
> HBASE-14030-v8.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14866) VerifyReplication should use peer configuration in peer connection

2015-12-09 Thread Gary Helmling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling updated HBASE-14866:
--
Attachment: hbase-14866-v6.patch

Attaching updated patch with minor checkstyle fix that I committed to master.

> VerifyReplication should use peer configuration in peer connection
> --
>
> Key: HBASE-14866
> URL: https://issues.apache.org/jira/browse/HBASE-14866
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14866.patch, HBASE-14866_v1.patch, 
> hbase-14866-v4.patch, hbase-14866-v5.patch, hbase-14866-v6.patch, 
> hbase-14866_v2.patch, hbase-14866_v3.patch
>
>
> VerifyReplication uses the replication peer's configuration to construct the 
> ZooKeeper quorum address for the peer connection.  However, other 
> configuration properties in the peer's configuration are dropped.  It should 
> merge all configuration properties from the {{ReplicationPeerConfig}} when 
> creating the peer connection and obtaining credentials for the peer cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-14451) Move on to htrace-4.0.1 (from htrace-3.2.0)

2015-12-09 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049340#comment-15049340
 ] 

stack edited comment on HBASE-14451 at 12/10/15 12:53 AM:
--

Giving up on this for the moment. I am not sending traces, so I need to do more 
debugging. Fixed a bunch of NPEs because there was no tracer in the particular 
context, but it doesn't seem like we are generating any spans at the moment after 
the redo to fit the htrace-4 semantics.

I was running with htrace DEBUG on and with the following config:

{code}
<property>
  <name>hbase.htrace.span.receiver.classes</name>
  <value>org.apache.htrace.impl.HTracedSpanReceiver</value>
  <description>The class name of the HTrace SpanReceivers to use inside
  HBase. If there are no class names supplied here, tracings will not
  be emitted.</description>
</property>
<property>
  <name>hbase.htrace.htraced.receiver.address</name>
  <value>localhost:9075</value>
</property>
<property>
  <name>hbase.htraced.error.log.period.ms</name>
  <value>1000</value>
</property>
<property>
  <name>hbase.htrace.sampler.classes</name>
  <value>org.apache.htrace.core.AlwaysSampler</value>
  <description>Sampler to use when tracing. Default is
  org.apache.htrace.core.NeverSampler. Other options are
  org.apache.htrace.core.AlwaysSampler and
  org.apache.htrace.core.ProbabilitySampler. See htrace-core
  for options provided by htrace.</description>
</property>
{code}

Attached is latest patch.


was (Author: stack):
Giving up on this for the moment. I am not sending traces, so I need to do more 
debugging. Fixed a bunch of NPEs because there was no tracer in the particular 
context, but it doesn't seem like we are generating any spans at the moment after 
the redo to fit the htrace-4 semantics.

I was running with htrace DEBUG on and with the following config:

{code}
+ <property>
+   <name>hbase.htrace.htraced.span.receiver.classes</name>
+   <value>org.apache.htrace.impl.HTracedSpanReceiver</value>
+   <description>The class name of the HTrace SpanReceivers to use inside
+   HBase. If there are no class names supplied here, tracings will not be
+   emitted.</description>
+ </property>
+ <property>
+   <name>hbase.htrace.htraced.receiver.address</name>
+   <value>localhost:9075</value>
+ </property>
+ <property>
+   <name>hbase.htraced.error.log.period.ms</name>
+   <value>1000</value>
+ </property>
+ <property>
+   <name>hbase.htrace.sampler.classes</name>
+   <value>org.apache.htrace.core.AlwaysSampler</value>
+   <description>Sampler to use when tracing. Default is
+   org.apache.htrace.core.NeverSampler. Other options are
+   org.apache.htrace.core.AlwaysSampler and
+   org.apache.htrace.core.ProbabilitySampler. See htrace-core
+   for options provided by htrace.</description>
+ </property>
{code}

Attached is latest patch.

> Move on to htrace-4.0.1 (from htrace-3.2.0)
> ---
>
> Key: HBASE-14451
> URL: https://issues.apache.org/jira/browse/HBASE-14451
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: stack
> Attachments: 14451.txt, 14451.v10.txt, 14451.v10.txt, 14451v11.patch, 
> 14451v13.txt, 14451v2.txt, 14451v3.txt, 14451v4.txt, 14451v5.txt, 
> 14451v6.txt, 14451v7.txt, 14451v8.txt, 14451v9.txt, 14551v12.patch
>
>
> htrace-4.0.0 was just released with a new API. Get up on it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14960) Fallback to using default RPCControllerFactory if class cannot be loaded

2015-12-09 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049883#comment-15049883
 ] 

Ashish Singhi commented on HBASE-14960:
---

{code}
+import org.mortbay.log.Log;
{code}
We use {{org.apache.commons.logging.Log}}, right ?
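For reference, a one-line sketch of the conventional commons-logging declaration (the class name here is arbitrary):

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class SomeHBaseClass {
  // Conventional commons-logging declaration; org.mortbay.log.Log is a Jetty
  // class that IDEs sometimes auto-import by mistake.
  private static final Log LOG = LogFactory.getLog(SomeHBaseClass.class);
}
{code}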

> Fallback to using default RPCControllerFactory if class cannot be loaded
> 
>
> Key: HBASE-14960
> URL: https://issues.apache.org/jira/browse/HBASE-14960
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: hbase-14960_v1.patch
>
>
> In Phoenix + HBase clusters, the hbase-site.xml configuration will point to a 
> custom rpc controller factory which is a Phoenix-specific one to configure 
> the priorities for index and system catalog table. 
> However, sometimes these Phoenix-enabled clusters are used from pure-HBase 
> client applications resulting in ClassNotFoundExceptions in application code 
> or MapReduce jobs. Since hbase configuration is shared between 
> Phoenix-clients and HBase clients, having different configurations at the 
> client side is hard. 
> We can instead try to load up the RPCControllerFactory from conf, and if not 
> found, fallback to the default one (in case this is a pure HBase client). In 
> case Phoenix is already in the classpath, it will work as usual. 
> This does not affect the rpc scheduler factory since it is only used at the 
> server side. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14947) WALProcedureStore improvements

2015-12-09 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-14947:

Priority: Blocker  (was: Minor)

> WALProcedureStore improvements
> --
>
> Key: HBASE-14947
> URL: https://issues.apache.org/jira/browse/HBASE-14947
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Reporter: Ashu Pachauri
>Assignee: Matteo Bertozzi
>Priority: Blocker
> Attachments: HBASE-14947-v0.patch, HBASE-14947-v1.patch
>
>
> We ended up with a deadlock in HBASE-14943, with the storeTracker and the lock 
> acquired in reverse order by syncLoop() and insert/update/delete. In 
> syncLoop() we don't need the lock when we try to roll or removeInactive. 
> Also, we can move the insert/update/delete tracker check into syncLoop(), 
> avoiding the extra lock operation.
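A generic illustration of the reversed lock ordering described above (toy locks only, not the WALProcedureStore code):

{code}
public class LockOrderSketch {
  private final Object storeLock = new Object();
  private final Object trackerLock = new Object();

  // One thread takes the store lock and then the tracker lock...
  void syncLoopStyle() {
    synchronized (storeLock) {
      synchronized (trackerLock) {
        // roll / removeInactive
      }
    }
  }

  // ...while another takes them in the opposite order; each can end up
  // waiting forever on the lock the other already holds.
  void insertStyle() {
    synchronized (trackerLock) {
      synchronized (storeLock) {
        // update the tracker, then the store
      }
    }
  }
}
{code}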



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-10390) expose checkAndPut/Delete custom comparators thru HTable

2015-12-09 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-10390:
--
Fix Version/s: 2.0.0
  Component/s: Client

> expose checkAndPut/Delete custom comparators thru HTable
> 
>
> Key: HBASE-10390
> URL: https://issues.apache.org/jira/browse/HBASE-10390
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Sergey Shelukhin
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-10390-v1.patch
>
>
> checkAndPut/Delete appear to support custom comparators. However, thru 
> HTable, there's no way to pass one, it always creates BinaryComparator from 
> value. It would be good to expose the custom ones in the API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14960) Fallback to using default RPCControllerFactory if class cannot be loaded

2015-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049862#comment-15049862
 ] 

Hadoop QA commented on HBASE-14960:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12776661/hbase-14960_v1.patch
  against master branch at commit 0e147a9d6e53e71ad2e57f512b4d3e1eeeac0b78.
  ATTACHMENT ID: 12776661

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16819//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16819//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16819//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16819//console

This message is automatically generated.

> Fallback to using default RPCControllerFactory if class cannot be loaded
> 
>
> Key: HBASE-14960
> URL: https://issues.apache.org/jira/browse/HBASE-14960
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: hbase-14960_v1.patch
>
>
> In Phoenix + HBase clusters, the hbase-site.xml configuration will point to a 
> custom rpc controller factory which is a Phoenix-specific one to configure 
> the priorities for index and system catalog table. 
> However, sometimes these Phoenix-enabled clusters are used from pure-HBase 
> client applications resulting in ClassNotFoundExceptions in application code 
> or MapReduce jobs. Since hbase configuration is shared between 
> Phoenix-clients and HBase clients, having different configurations at the 
> client side is hard. 
> We can instead try to load up the RPCControllerFactory from conf, and if not 
> found, fallback to the default one (in case this is a pure HBase client). In 
> case Phoenix is already in the classpath, it will work as usual. 
> This does not affect the rpc scheduler factory since it is only used at the 
> server side. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2015-12-09 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049653#comment-15049653
 ] 

Francis Liu commented on HBASE-6721:


[~te...@apache.org] Looks like TestSimpleRegionNormalizer is using negative 
port numbers for ServerName. Should be a simple fix to update the test. How are 
you able to run TestShell? I only see an abstract class.

> RegionServer Group based Assignment
> ---
>
> Key: HBASE-6721
> URL: https://issues.apache.org/jira/browse/HBASE-6721
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
>  Labels: hbase-6721
> Attachments: 6721-master-webUI.patch, HBASE-6721 
> GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, 
> HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, 
> HBASE-6721_15.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
> HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
> HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
> HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
> HBASE-6721_94_7.patch, HBASE-6721_98_1.patch, HBASE-6721_98_2.patch, 
> HBASE-6721_hbase-6721_addendum.patch, HBASE-6721_trunk.patch, 
> HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, 
> HBASE-6721_trunk2.patch, balanceCluster Sequence Diagram.svg, 
> hbase-6721-v15-branch-1.1.patch, hbase-6721-v16.patch, hbase-6721-v17.patch, 
> hbase-6721-v18.patch, immediateAssignments Sequence Diagram.svg, 
> randomAssignment Sequence Diagram.svg, retainAssignment Sequence Diagram.svg, 
> roundRobinAssignment Sequence Diagram.svg
>
>
> In multi-tenant deployments of HBase, it is likely that a RegionServer will 
> be serving out regions from a number of different tables owned by various 
> client applications. Being able to group a subset of running RegionServers 
> and assign specific tables to it, provides a client application a level of 
> isolation and resource allocation.
> The proposal essentially is to have an AssignmentManager which is aware of 
> RegionServer groups and assigns tables to region servers based on groupings. 
> Load balancing will occur on a per group basis as well. 
> This is essentially a simplification of the approach taken in HBASE-4120. See 
> attached document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14851) Add test showing how to use TTL from thrift

2015-12-09 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14851:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.3.0
   1.2.0
   Status: Resolved  (was: Patch Available)

> Add test showing how to use TTL from thrift
> ---
>
> Key: HBASE-14851
> URL: https://issues.apache.org/jira/browse/HBASE-14851
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14851-v1.patch, HBASE-14851.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14954) IllegalArgumentException was thrown when doing online configuration change in CompactSplitThread

2015-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049681#comment-15049681
 ] 

Hudson commented on HBASE-14954:


FAILURE: Integrated in HBase-1.1-JDK7 #1617 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1617/])
HBASE-14954 IllegalArgumentException was thrown when doing online (tedyu: rev 
3f779e4d633c24f1a32bd4ee1754a84198855376)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactSplitThread.java


> IllegalArgumentException was thrown when doing online configuration change in 
> CompactSplitThread
> 
>
> Key: HBASE-14954
> URL: https://issues.apache.org/jira/browse/HBASE-14954
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, regionserver
>Affects Versions: 1.1.2
>Reporter: Victor Xu
>Assignee: Victor Xu
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-14954-v1.patch
>
>
> Online configuration change is a terrific feature for HBase administrators. 
> However, when we use this feature to tune the compaction thread pool size online, 
> it triggers an IllegalArgumentException. The cause is the order of the 
> setMaximumPoolSize() and setCorePoolSize() calls on ThreadPoolExecutor: when 
> making the pools bigger, we should call setMaximumPoolSize() first; when making 
> them smaller, we need to call setCorePoolSize() first. Besides, there is also a 
> copy-paste bug in the merge and split thread pools which I will fix together.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14958) regionserver.HRegionServer: Master passed us a different hostname to use; was=n04docker2, but now=192.168.3.114

2015-12-09 Thread Yong Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049753#comment-15049753
 ] 

Yong Zheng commented on HBASE-14958:


Thanks, Nick, for the prompt response. 

After checking the prerequisites, DNS can't solve the issue. 

My virtualized hbase cluster has only 4 nodes: 
n03docker1(172.17.1.2)
n03docker2(172.17.1.3)

n04docker1(172.17.2.2)
n04docker2(172.17.2.3)

DNS is not configured but I configured /etc/hosts:
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

172.17.1.1   c3m3n03docker.gpfs.net c3m3n03docker   <== the br0 on the physical node c3m3n03
172.17.2.1   c3m3n04docker.gpfs.net c3m3n04docker   <== the br0 on the physical node c3m3n04

172.17.1.2   n03docker1.gpfs.net n03docker1
172.17.1.3   n03docker2.gpfs.net n03docker2
172.17.2.2   n04docker1.gpfs.net n04docker1
172.17.2.3   n04docker2.gpfs.net n04docker2

So, DNS resolution works (I do see the correct names for n03docker1 and 
n03docker2). However, for any region servers located on other physical 
machines, all network packets from those region servers will be source NATed 
with the IP of c3m3n04 (192.168.3.114); that means every IP packet will be rewritten 
with 192.168.3.114 as the source IP so that the packets can be forwarded 
to the physical node c3m3n03.

For the hbase master, 192.168.3.113 and 192.168.3.114 are not visible as HBase 
hosts. Thus, DNS resolution for 192.168.3.114 inside the VM doesn't help here; e.g. 
192.168.3.114's hostname should be c3m3n04, not n04docker1 or n04docker2.
If we configure DNS inside the VM to map 192.168.3.114 to n04docker1 or 
n04docker2, this will mess up the IP-to-hostname mapping inside the VM. Also, if we 
map 192.168.3.114 to n04docker1, we can't start a 2nd region server 
on the same physical node, because both would be recognized as the physical 
node's IP address/hostname.

> regionserver.HRegionServer: Master passed us a different hostname to use; 
> was=n04docker2, but now=192.168.3.114
> ---
>
> Key: HBASE-14958
> URL: https://issues.apache.org/jira/browse/HBASE-14958
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
> Environment: physical machines: redhat7.1
> docker version: 1.9.1
>Reporter: Yong Zheng
>
> I have two physical machines: c3m3n03docker and c3m3n04docker.
> I started two docker instances per physical node. The topology looks like:
> n03docker1(172.17.1.2)  -\
>   | br0(172.17.1.1)  +  c3m3n03
> n03docker2(172.17.1.3) -/
> n04docker1(172.17.2.2)  -\
>   | br0(172.17.2.1)  +  c3m3n04
> n04docker2(172.17.2.3) -/
> On the physical machines, c3m3n03 is bound to physical adapter enp11s0f0 
> with IP 192.168.3.113/16, and c3m3n04 is bound to physical adapter 
> enp11s0f0 with IP 192.168.3.114/16. These two physical adapters are 
> connected to the same switch.
> Note: br0 is not bridged to physical adapter enp11s0f0 on either node, so 
> all requests from 172.17.2.x are source-NATed to 192.168.3.114 (c3m3n04) and 
> forwarded to c3m3n03.
> n03docker1: hbase(1.1.2) master
> n03docker2: region server
> n04docker1: region server
> n04docker2: region server
> I first start n03docker1 and n03docker2, and that works; after that, I start 
> n04docker2 and it reports:
> 2015-12-09 08:01:58,259 ERROR 
> [regionserver/n04docker2.gpfs.net/172.17.2.3:16020] 
> regionserver.HRegionServer: Master passed us a different hostname to use; 
> was=n04docker2.gpfs.net, but now=192.168.3.114
> In the master logs:
> 2015-12-09 08:11:12,234 INFO  
> [PriorityRpcServer.handler=0,queue=0,port=16000] master.ServerManager: 
> Registering server=192.168.3.114,16020,144970721
> So, when the HBase master receives requests from n04docker2, all of these 
> requests are source-NATed to 192.168.3.114 (not 172.17.2.3), and the master 
> passes 192.168.3.114 back to 172.17.2.3 (n04docker2). Thus, 
> n04docker2 (172.17.2.3) reports the exception above in its logs.
> Does HBase not support running in a virtualized cluster? SNAT is widely 
> used in virtualization. If the HBase master takes the remote hostname/IP 
> (and thus gets 192.168.3.114) and passes it back to the region server, it 
> will hit this issue.
> HBASE-8667 doesn't fix this issue; that fix was for HBase 0.98 (I'm 
> using HBase 1.1.2).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14030) HBase Backup/Restore Phase 1

2015-12-09 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14030:
--
Status: Open  (was: Patch Available)

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v1.patch, 
> HBASE-14030-v10.patch, HBASE-14030-v11.patch, HBASE-14030-v12.patch, 
> HBASE-14030-v13.patch, HBASE-14030-v14.patch, HBASE-14030-v15.patch, 
> HBASE-14030-v17.patch, HBASE-14030-v18.patch, HBASE-14030-v2.patch, 
> HBASE-14030-v20.patch, HBASE-14030-v3.patch, HBASE-14030-v4.patch, 
> HBASE-14030-v5.patch, HBASE-14030-v6.patch, HBASE-14030-v7.patch, 
> HBASE-14030-v8.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14030) HBase Backup/Restore Phase 1

2015-12-09 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14030:
--
Attachment: HBASE-14030-v20.patch

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v1.patch, 
> HBASE-14030-v10.patch, HBASE-14030-v11.patch, HBASE-14030-v12.patch, 
> HBASE-14030-v13.patch, HBASE-14030-v14.patch, HBASE-14030-v15.patch, 
> HBASE-14030-v17.patch, HBASE-14030-v18.patch, HBASE-14030-v2.patch, 
> HBASE-14030-v20.patch, HBASE-14030-v3.patch, HBASE-14030-v4.patch, 
> HBASE-14030-v5.patch, HBASE-14030-v6.patch, HBASE-14030-v7.patch, 
> HBASE-14030-v8.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14030) HBase Backup/Restore Phase 1

2015-12-09 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14030:
--
Status: Patch Available  (was: Open)

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v1.patch, 
> HBASE-14030-v10.patch, HBASE-14030-v11.patch, HBASE-14030-v12.patch, 
> HBASE-14030-v13.patch, HBASE-14030-v14.patch, HBASE-14030-v15.patch, 
> HBASE-14030-v17.patch, HBASE-14030-v18.patch, HBASE-14030-v2.patch, 
> HBASE-14030-v20.patch, HBASE-14030-v3.patch, HBASE-14030-v4.patch, 
> HBASE-14030-v5.patch, HBASE-14030-v6.patch, HBASE-14030-v7.patch, 
> HBASE-14030-v8.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049642#comment-15049642
 ] 

Hadoop QA commented on HBASE-14946:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12776625/HBASE-14946-v7.patch
  against master branch at commit 0e147a9d6e53e71ad2e57f512b4d3e1eeeac0b78.
  ATTACHMENT ID: 12776625

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
new checkstyle errors. Check build console for list of new errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  String EXCEPTIONS_MULTI_TOO_LARGE_DESC = "A response to a mulit request 
was too large and the rest of the requests will have to be retried.";
+  public static boolean hasMinimumVersion(HBaseProtos.VersionInfo versionInfo, 
int major, int minor) {

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16815//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16815//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16815//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16815//console

This message is automatically generated.

> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Attachments: HBASE-14946-v1.patch, HBASE-14946-v2.patch, 
> HBASE-14946-v3.patch, HBASE-14946-v5.patch, HBASE-14946-v6.patch, 
> HBASE-14946-v7.patch, HBASE-14946-v8.patch, HBASE-14946.patch
>
>
> If a user sends a list of tons of different gets to a table, we send them 
> along in a single multi. The server unwraps each get in the multi. While 
> no single get may be over the size limit, the total might be.
> We should protect the server from this. 
> We should batch up on the server side so each RPC stays smaller.
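
While the jira proposes protecting on the server side, a client can also avoid oversized multis by chunking its get list before calling Table.get(List<Get>). A rough sketch under assumed names (the table name and chunk size are illustrative, and this is not part of the patch):

{noformat}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;

public class ChunkedGets {
  // Split a large list of Gets into fixed-size chunks so no single multi RPC
  // carries an unbounded result payload.
  static List<Result> getInChunks(Table table, List<Get> gets, int chunkSize)
      throws IOException {
    List<Result> all = new ArrayList<>(gets.size());
    for (int i = 0; i < gets.size(); i += chunkSize) {
      List<Get> chunk = gets.subList(i, Math.min(i + chunkSize, gets.size()));
      for (Result r : table.get(chunk)) {
        all.add(r);
      }
    }
    return all;
  }

  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("my_table"))) {
      List<Get> gets = new ArrayList<>();  // populate with the rows to fetch
      List<Result> results = getInChunks(table, gets, 100);
      System.out.println("fetched " + results.size() + " rows");
    }
  }
}
{noformat}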



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14942) Allow turning off BoundedByteBufferPool

2015-12-09 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14942:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the reviews

> Allow turning off BoundedByteBufferPool
> ---
>
> Key: HBASE-14942
> URL: https://issues.apache.org/jira/browse/HBASE-14942
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14942.patch
>
>
> The G1 collector does a great job of compacting; there's no reason to use the 
> BoundedByteBufferPool when the JVM can do it for us. So we should allow turning 
> this off for people who are running new JVMs where G1 is working well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14745) Shade the last few dependencies in hbase-shaded-client

2015-12-09 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049643#comment-15049643
 ] 

Elliott Clark commented on HBASE-14745:
---

Tested this and it looks better.
It also fixes the problem of not being able to build with the release profile.

> Shade the last few dependencies in hbase-shaded-client
> --
>
> Key: HBASE-14745
> URL: https://issues.apache.org/jira/browse/HBASE-14745
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Blocker
> Fix For: 1.2.0
>
> Attachments: HBASE-14745-v1.patch, HBASE-14745.patch
>
>
> * junit
> * hadoop common



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14745) Shade the last few dependencies in hbase-shaded-client

2015-12-09 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049687#comment-15049687
 ] 

Sean Busbey commented on HBASE-14745:
-

+1, presuming we can't make the shading patterns pass the line length checks.

> Shade the last few dependencies in hbase-shaded-client
> --
>
> Key: HBASE-14745
> URL: https://issues.apache.org/jira/browse/HBASE-14745
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Blocker
> Fix For: 1.2.0
>
> Attachments: HBASE-14745-v1.patch, HBASE-14745.patch
>
>
> * junit
> * hadoop common



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14961) hbase-env.sh clobbers environment HBASE_OPTS

2015-12-09 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HBASE-14961:


 Summary: hbase-env.sh clobbers environment HBASE_OPTS
 Key: HBASE-14961
 URL: https://issues.apache.org/jira/browse/HBASE-14961
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 1.1.2
Reporter: Nick Dimiduk
Priority: Minor


In hbase-env.sh we have

{noformat}
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
{noformat}

This clobbers any value provided on the CLI, which makes it difficult to, e.g., 
debug a job launched through the otherwise convenient {{bin/hbase}} class 
launcher functionality.

Looks like we've been through here before (HBASE-3423, HBASE-6888, HBASE-12021) 
-- maybe someone knows the history better, or whether this is by design.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14960) Fallback to using default RPCControllerFactory if class cannot be loaded

2015-12-09 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-14960:
--
Status: Patch Available  (was: Open)

> Fallback to using default RPCControllerFactory if class cannot be loaded
> 
>
> Key: HBASE-14960
> URL: https://issues.apache.org/jira/browse/HBASE-14960
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: hbase-14960_v1.patch
>
>
> In Phoenix + HBase clusters, the hbase-site.xml configuration points to a 
> custom, Phoenix-specific RPC controller factory that configures the 
> priorities for the index and system catalog tables. 
> However, sometimes these Phoenix-enabled clusters are used from pure-HBase 
> client applications, resulting in ClassNotFoundExceptions in application code 
> or MapReduce jobs. Since the hbase configuration is shared between Phoenix 
> clients and HBase clients, maintaining different configurations on the 
> client side is hard. 
> We can instead try to load the RPCControllerFactory from the configuration 
> and, if it is not found, fall back to the default one (in case this is a pure 
> HBase client). If Phoenix is already on the classpath, it will work as usual. 
> This does not affect the RPC scheduler factory since that is only used on the 
> server side. 
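
A generic sketch of the fallback pattern described above; the class names and configuration key below are placeholders, not the actual HBase/Phoenix ones:

{noformat}
import org.apache.hadoop.conf.Configuration;

public class ControllerFactoryFallback {
  // Placeholder for the default factory a pure-HBase client would use.
  public static class DefaultFactory {}

  // Illustrative configuration key, not the real one.
  static final String CONTROLLER_FACTORY_KEY = "custom.rpc.controllerfactory.class";

  // Instantiate the configured factory; if its class is not on the classpath
  // (e.g. a Phoenix-only class seen by a pure-HBase client), fall back to the
  // default instead of letting a ClassNotFoundException escape.
  static Object instantiateWithFallback(Configuration conf) {
    String className = conf.get(CONTROLLER_FACTORY_KEY, DefaultFactory.class.getName());
    try {
      Class<?> clazz = Class.forName(className);
      return clazz.getDeclaredConstructor().newInstance();
    } catch (ReflectiveOperationException e) {
      // Missing or unconstructible class: use the default factory.
      return new DefaultFactory();
    }
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set(CONTROLLER_FACTORY_KEY, "org.example.phoenix.MissingFactory");  // not on classpath
    System.out.println(instantiateWithFallback(conf).getClass().getName());  // prints DefaultFactory
  }
}
{noformat}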



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-09 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14946:
--
Attachment: HBASE-14946-v9.patch

Even more checkstyles.

> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Attachments: HBASE-14946-v1.patch, HBASE-14946-v2.patch, 
> HBASE-14946-v3.patch, HBASE-14946-v5.patch, HBASE-14946-v6.patch, 
> HBASE-14946-v7.patch, HBASE-14946-v8.patch, HBASE-14946-v9.patch, 
> HBASE-14946.patch
>
>
> If a user sends a list of tons of different gets to a table, we send them 
> along in a single multi. The server unwraps each get in the multi. While 
> no single get may be over the size limit, the total might be.
> We should protect the server from this. 
> We should batch up on the server side so each RPC stays smaller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14795) Enhance the spark-hbase scan operations

2015-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049742#comment-15049742
 ] 

Hadoop QA commented on HBASE-14795:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12776644/HBASE-14795-3.patch
  against master branch at commit 0e147a9d6e53e71ad2e57f512b4d3e1eeeac0b78.
  ATTACHMENT ID: 12776644

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:red}-1 javac{color}.  The applied patch generated 37 javac compiler 
warnings (more than the master's current 35 warnings).

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16817//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16817//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16817//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16817//console

This message is automatically generated.

> Enhance the spark-hbase scan operations
> ---
>
> Key: HBASE-14795
> URL: https://issues.apache.org/jira/browse/HBASE-14795
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Malaska
>Assignee: Zhan Zhang
>Priority: Minor
> Attachments: 
> 0001-HBASE-14795-Enhance-the-spark-hbase-scan-operations.patch, 
> HBASE-14795-1.patch, HBASE-14795-2.patch, HBASE-14795-3.patch
>
>
> This is a sub-jira of HBASE-14789.  This jira focuses on replacing 
> TableInputFormat with a more custom scan implementation that will make the 
> following use case more effective.
> Use case:
> You have multiple scan ranges on a single table within a single 
> query.  TableInputFormat will scan the outer range spanning the scan start and 
> end rows, whereas this implementation can be more targeted.
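
For context, one way to express multiple row ranges in a single client-side scan today is HBase's MultiRowRangeFilter. The sketch below is illustrative only (the table name and row keys are made up) and is separate from the Spark-side change proposed here:

{noformat}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.MultiRowRangeFilter;
import org.apache.hadoop.hbase.filter.MultiRowRangeFilter.RowRange;
import org.apache.hadoop.hbase.util.Bytes;

public class MultiRangeScan {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("my_table"))) {
      // Two disjoint row ranges served by a single scan, instead of one wide
      // scan covering everything between the outermost start and stop rows.
      List<RowRange> ranges = new ArrayList<>();
      ranges.add(new RowRange(Bytes.toBytes("row-0100"), true, Bytes.toBytes("row-0200"), false));
      ranges.add(new RowRange(Bytes.toBytes("row-0900"), true, Bytes.toBytes("row-0950"), false));
      Scan scan = new Scan();
      scan.setFilter(new MultiRowRangeFilter(ranges));
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result r : scanner) {
          System.out.println(Bytes.toString(r.getRow()));
        }
      }
    }
  }
}
{noformat}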



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14942) Allow turning off BoundedByteBufferPool

2015-12-09 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049624#comment-15049624
 ] 

Elliott Clark commented on HBASE-14942:
---

Yeah this seems to make a small difference. Going to commit.

> Allow turning off BoundedByteBufferPool
> ---
>
> Key: HBASE-14942
> URL: https://issues.apache.org/jira/browse/HBASE-14942
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14942.patch
>
>
> The G1 collector does a great job of compacting; there's no reason to use the 
> BoundedByteBufferPool when the JVM can do it for us. So we should allow turning 
> this off for people who are running new JVMs where G1 is working well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14953) HBaseInterClusterReplicationEndpoint: Do not retry the whole batch of edits in case of RejectedExecutionException

2015-12-09 Thread Ashu Pachauri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashu Pachauri updated HBASE-14953:
--
Priority: Critical  (was: Major)

> HBaseInterClusterReplicationEndpoint: Do not retry the whole batch of edits 
> in case of RejectedExecutionException
> -
>
> Key: HBASE-14953
> URL: https://issues.apache.org/jira/browse/HBASE-14953
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
>Priority: Critical
> Attachments: HBASE-14953-V1.patch
>
>
> When the wal provider is set to multiwal, the ReplicationSource has multiple 
> worker threads submitting batches to HBaseInterClusterReplicationEndpoint. In 
> such a scenario, it is quite common to encounter RejectedExecutionException, 
> because shipping edits to the peer cluster takes much longer than reading 
> edits from the source and submitting more batches to the endpoint. 
> The logs are just filled with warnings due to this very exception.
> Since we subdivide batches before actually shipping them, we don't need to 
> fail and resend the whole batch if one of the sub-batches fails with 
> RejectedExecutionException. Rather, we should just retry the failed 
> sub-batches. 
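
A generic sketch of retrying only the rejected or failed sub-batches rather than the whole batch. This is illustrative, not the attached patch: the Callable tasks stand in for the endpoint's per-sub-batch shipping work, and a real implementation would cap retries and back off.

{noformat}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.RejectedExecutionException;

public class SubBatchRetry {
  // Submit every sub-batch; collect only those whose submission was rejected
  // (or whose execution failed) and retry just that subset, instead of
  // resending the entire parent batch. Real code would bound the retry loop.
  static <T> void shipWithRetry(ExecutorService pool, List<Callable<T>> subBatches)
      throws InterruptedException {
    List<Callable<T>> pending = subBatches;
    while (!pending.isEmpty()) {
      List<Callable<T>> failed = new ArrayList<>();
      List<Future<T>> futures = new ArrayList<>();
      List<Callable<T>> submitted = new ArrayList<>();
      for (Callable<T> task : pending) {
        try {
          futures.add(pool.submit(task));
          submitted.add(task);
        } catch (RejectedExecutionException e) {
          failed.add(task);  // retry only this sub-batch later
        }
      }
      for (int i = 0; i < futures.size(); i++) {
        try {
          futures.get(i).get();
        } catch (ExecutionException e) {
          failed.add(submitted.get(i));  // execution failed: retry this sub-batch
        }
      }
      pending = failed;
    }
  }

  public static void main(String[] args) throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    List<Callable<String>> batches = new ArrayList<>();
    for (int i = 0; i < 5; i++) {
      final int id = i;
      batches.add(() -> "shipped sub-batch " + id);  // stands in for real edit shipping
    }
    shipWithRetry(pool, batches);
    pool.shutdown();
  }
}
{noformat}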



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14953) HBaseInterClusterReplicationEndpoint: Do not retry the whole batch of edits in case of RejectedExecutionException

2015-12-09 Thread Ashu Pachauri (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049885#comment-15049885
 ] 

Ashu Pachauri commented on HBASE-14953:
---

To give more context on why we really need to do this:
1. It impacts replication performance, because we are retrying unnecessarily.
2. The more pressing concern is that it creates significant extra traffic to 
peer clusters, because a good chunk of edits already sent across the wire 
(which would likely succeed) is resent. This translates into more load on 
both clusters.
3. It clutters the logs with too many warnings; the major portion of the logs 
is filled with these exceptions.
These factors become more significant in a high-traffic environment. 

> HBaseInterClusterReplicationEndpoint: Do not retry the whole batch of edits 
> in case of RejectedExecutionException
> -
>
> Key: HBASE-14953
> URL: https://issues.apache.org/jira/browse/HBASE-14953
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Attachments: HBASE-14953-V1.patch
>
>
> When the wal provider is set to multiwal, the ReplicationSource has multiple 
> worker threads submitting batches to HBaseInterClusterReplicationEndpoint. In 
> such a scenario, it is quite common to encounter RejectedExecutionException, 
> because shipping edits to the peer cluster takes much longer than reading 
> edits from the source and submitting more batches to the endpoint. 
> The logs are just filled with warnings due to this very exception.
> Since we subdivide batches before actually shipping them, we don't need to 
> fail and resend the whole batch if one of the sub-batches fails with 
> RejectedExecutionException. Rather, we should just retry the failed 
> sub-batches. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


  1   2   >