[jira] [Commented] (HBASE-12133) Add FastLongHistogram for metric computation

2014-10-01 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154409#comment-14154409
 ] 

stack commented on HBASE-12133:
---

Thanks for the contrib. Where would we use it, do you think? How is it better 
than the histograms we currently use?  On setMin and setMax, what do you think 
of the discussion at the end of this blog post describing an implementation that 
looks a little similar -- spinning until we can successfully CAS the value -- 
http://gridgain.blogspot.com/2011/06/even-better-atomicinteger-and.html  

 Add FastLongHistogram for metric computation
 

 Key: HBASE-12133
 URL: https://issues.apache.org/jira/browse/HBASE-12133
 Project: HBase
  Issue Type: New Feature
  Components: metrics
Affects Versions: 0.98.8
Reporter: Yi Deng
Assignee: Yi Deng
Priority: Minor
  Labels: histogram, metrics
 Fix For: 0.98.8

 Attachments: 
 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch


 FastLongHistogram is a thread-safe class that estimates the distribution of 
 data and computes quantiles. It's useful for computing aggregated metrics like 
 P99/P95.
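The bucketed quantile estimation this describes can be sketched roughly as follows. This is a hypothetical simplification for illustration only, not the code in the attached patch; the class and method names are invented:

```java
/** Rough sketch of fixed-bucket quantile estimation in the spirit of the
 *  description above. Hypothetical code, not the attached patch. */
public class BucketHistogramSketch {
    private final long[] counts;
    private final long min, max;

    public BucketHistogramSketch(long min, long max, int buckets) {
        this.min = min;
        this.max = max;
        this.counts = new long[buckets];
    }

    /** Record one value by bumping the count of its bucket. */
    public void add(long value) {
        int i = (int) ((value - min) * counts.length / (max - min + 1));
        if (i < 0) i = 0;                        // clamp out-of-range values
        if (i >= counts.length) i = counts.length - 1;
        counts[i]++;
    }

    /** Approximate the q-quantile by walking cumulative bucket counts. */
    public long quantile(double q) {
        long total = 0;
        for (long c : counts) total += c;
        long target = (long) Math.ceil(total * q);
        long seen = 0;
        for (int i = 0; i < counts.length; i++) {
            seen += counts[i];
            if (seen >= target) {
                // report the upper edge of the bucket that crosses the target
                return min + (i + 1) * (max - min + 1) / counts.length - 1;
            }
        }
        return max;
    }
}
```

A real implementation would additionally need thread-safe counters (e.g. AtomicLongArray) and some way to adapt the value range.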



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12049) Help for alter command is a bit confusing

2014-10-01 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154411#comment-14154411
 ] 

stack commented on HBASE-12049:
---

A word of caution would not go amiss.  This feature has been buggy in the past 
and it's my guess that it sees little use -- just saying.  Want to commit and 
add a note on commit [~misty]? Else I can.

 Help for alter command is a bit confusing
 -

 Key: HBASE-12049
 URL: https://issues.apache.org/jira/browse/HBASE-12049
 Project: HBase
  Issue Type: Improvement
  Components: documentation, shell
Affects Versions: 0.98.5
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Trivial
  Labels: shell
 Attachments: HBASE-12049-1.patch, HBASE-12049.patch


 The help message shown for the alter command is a bit confusing.
 A part of the current help message:
 {code}
 Here is some help for this command:
 Alter a table. Depending on the HBase setting 
 (hbase.online.schema.update.enable),
 the table must be disabled or not to be altered (see help 'disable').
 You can add/modify/delete column families, as well as change table
 configuration. Column families work similarly to create; column family
 spec can either be a name string, or a dictionary with NAME attribute.
 Dictionaries are described on the main help command output.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12116) Hot contention spots; writing

2014-10-01 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154418#comment-14154418
 ] 

Anoop Sam John commented on HBASE-12116:


On the TRT patch:
On the first call to includeTimestamp() we need to set the minimumTimestamp 
and maximumTimestamp to the given ts, no?
If the includeTimestamp() calls happen with decreasing ts values, you can see 
that the min value will keep changing while the max will never change and stay 
at -1 forever!
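The suggested fix can be sketched like this (a simplified, hypothetical stand-in for the TRT code under discussion, not the actual patch):

```java
/** Simplified sketch of the initialization issue described above.
 *  Hypothetical code, not HBase's actual TimeRangeTracker. */
public class TimeRangeSketch {
    private long minimumTimestamp = -1;
    private long maximumTimestamp = -1;

    public void includeTimestamp(long ts) {
        if (maximumTimestamp == -1) {
            // First call: seed BOTH bounds with the given ts. Without this,
            // a strictly decreasing sequence of timestamps keeps lowering
            // min while max stays stuck at -1.
            minimumTimestamp = ts;
            maximumTimestamp = ts;
            return;
        }
        if (ts < minimumTimestamp) minimumTimestamp = ts;
        if (ts > maximumTimestamp) maximumTimestamp = ts;
    }

    public long getMinimum() { return minimumTimestamp; }
    public long getMaximum() { return maximumTimestamp; }
}
```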

 Hot contention spots; writing
 -

 Key: HBASE-12116
 URL: https://issues.apache.org/jira/browse/HBASE-12116
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Attachments: 12116.checkForReplicas.txt, 
 12116.stringify.and.cache.scanner.maxsize.txt, 12116.txt, Screen Shot 
 2014-09-29 at 5.12.51 PM.png, Screen Shot 2014-09-30 at 10.39.34 PM.png


 Playing with flight recorder, here are some write-time contentious 
 synchronizations/locks (picture coming)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12135) Website is broken

2014-10-01 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-12135:

Issue Type: Bug  (was: Sub-task)
Parent: (was: HBASE-12134)

 Website is broken
 -

 Key: HBASE-12135
 URL: https://issues.apache.org/jira/browse/HBASE-12135
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
Priority: Blocker
 Fix For: 2.0.0, 0.99.1


 Website is broken because of a flaw in the publish script (see HBASE-12134). 
 Currently reverting; the script itself has already been fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12135) Website is broken

2014-10-01 Thread Misty Stanley-Jones (JIRA)
Misty Stanley-Jones created HBASE-12135:
---

 Summary: Website is broken
 Key: HBASE-12135
 URL: https://issues.apache.org/jira/browse/HBASE-12135
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
Priority: Blocker


Website is broken because of a flaw in the publish script (see HBASE-12134). 
Currently reverting; the script itself has already been fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12049) Help for alter command is a bit confusing

2014-10-01 Thread Misty Stanley-Jones (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154419#comment-14154419
 ] 

Misty Stanley-Jones commented on HBASE-12049:
-

I got it.

 Help for alter command is a bit confusing
 -

 Key: HBASE-12049
 URL: https://issues.apache.org/jira/browse/HBASE-12049
 Project: HBase
  Issue Type: Improvement
  Components: documentation, shell
Affects Versions: 0.98.5
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Trivial
  Labels: shell
 Attachments: HBASE-12049-1.patch, HBASE-12049.patch


 The help message shown for the alter command is a bit confusing.
 A part of the current help message:
 {code}
 Here is some help for this command:
 Alter a table. Depending on the HBase setting 
 (hbase.online.schema.update.enable),
 the table must be disabled or not to be altered (see help 'disable').
 You can add/modify/delete column families, as well as change table
 configuration. Column families work similarly to create; column family
 spec can either be a name string, or a dictionary with NAME attribute.
 Dictionaries are described on the main help command output.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11153) http webUI's should redirect to https when enabled

2014-10-01 Thread Kiran Kumar M R (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154422#comment-14154422
 ] 

Kiran Kumar M R commented on HBASE-11153:
-

I am fine with closing this as 'Won't Fix'. Maybe we can explain in the 
documentation that users need to specify 'https' in the URL when SSL is enabled 
for the web UI.

 http webUI's should redirect to https when enabled
 --

 Key: HBASE-11153
 URL: https://issues.apache.org/jira/browse/HBASE-11153
 Project: HBase
  Issue Type: Bug
  Components: master, regionserver, UI
Affects Versions: 0.98.0
Reporter: Nick Dimiduk
Assignee: Kiran Kumar M R
Priority: Minor
  Labels: beginner

 When configured to listen on https, we should redirect non-secure requests to 
 the appropriate port/protocol. Currently we respond with a 200 and no data, 
 which is perplexing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12126) Region server coprocessor endpoint

2014-10-01 Thread Virag Kothari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154429#comment-14154429
 ] 

Virag Kothari commented on HBASE-12126:
---

bq. Maybe there's a better name but it's not totally off because there will 
only be one service instance of this type per regionserver.

Makes sense to me. Will add SingletonCoprocessorService.

bq. Specifically for HBASE-12125... Do you need this, though? I'd be OK if you 
added a new RPC to region server, since this is internal functionality anyway.

Initially we thought of building a replication WAL checker tool (HBASE-12125) 
separate from hbck, so we thought of having a coprocessor. But now we have 
added that as an option to hbck, as it's easier to run in our internal setups. 
So even though it is not strictly necessary, I think we can still go ahead with 
the pluggable coprocessor-based solution for HBASE-12125, as it will be easier 
to separate out from hbck if needed.
I will remove the JIRA dependency and update the description, as it's confusing.



 Region server coprocessor endpoint
 --

 Key: HBASE-12126
 URL: https://issues.apache.org/jira/browse/HBASE-12126
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.98.6
Reporter: Virag Kothari
Assignee: Virag Kothari
 Attachments: HBASE-12126-0.98.patch


 Primary use case in HBASE-12125 but being able to make endpoint calls against 
 region server can be useful in general.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12049) Help for alter command is a bit confusing

2014-10-01 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-12049:

   Resolution: Fixed
Fix Version/s: 0.99.1
   2.0.0
   Status: Resolved  (was: Patch Available)

Committed to master and branch-1. Thanks [~ashish singhi]

 Help for alter command is a bit confusing
 -

 Key: HBASE-12049
 URL: https://issues.apache.org/jira/browse/HBASE-12049
 Project: HBase
  Issue Type: Improvement
  Components: documentation, shell
Affects Versions: 0.98.5
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Trivial
  Labels: shell
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE-12049-1.patch, HBASE-12049.patch


 The help message shown for the alter command is a bit confusing.
 A part of the current help message:
 {code}
 Here is some help for this command:
 Alter a table. Depending on the HBase setting 
 (hbase.online.schema.update.enable),
 the table must be disabled or not to be altered (see help 'disable').
 You can add/modify/delete column families, as well as change table
 configuration. Column families work similarly to create; column family
 spec can either be a name string, or a dictionary with NAME attribute.
 Dictionaries are described on the main help command output.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12126) Region server coprocessor endpoint

2014-10-01 Thread Virag Kothari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virag Kothari updated HBASE-12126:
--
Description: Utility to make endpoint calls against region server  (was: 
Primary use case in HBASE-12125 but being able to make endpoint calls against 
region server can be useful in general.)

 Region server coprocessor endpoint
 --

 Key: HBASE-12126
 URL: https://issues.apache.org/jira/browse/HBASE-12126
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.98.6
Reporter: Virag Kothari
Assignee: Virag Kothari
 Attachments: HBASE-12126-0.98.patch


 Utility to make endpoint calls against region server



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12125) Add Hbck option to check and fix WAL's from replication queue

2014-10-01 Thread Virag Kothari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154441#comment-14154441
 ] 

Virag Kothari commented on HBASE-12125:
---

A WAL roll on the region server would be required only if the current WAL (the 
WAL being written to) is corrupted. So fixCorruptedReplicationWAL can be useful 
if we know that the current WAL being written to is OK.

 Add Hbck option to check and fix WAL's from replication queue
 -

 Key: HBASE-12125
 URL: https://issues.apache.org/jira/browse/HBASE-12125
 Project: HBase
  Issue Type: Bug
  Components: Replication
Reporter: Virag Kothari
Assignee: Virag Kothari

 The replication source will discard the WAL file in many cases when it 
 encounters an exception reading it. This can cause data loss,
 and the underlying reason for the failed read remains hidden.  Only in certain 
 scenarios should the replication source dump the current WAL and move to the 
 next one. 
 This JIRA aims to add an hbck option to check the WAL files of replication 
 queues for any inconsistencies, and also to provide an option to fix them.
 The fix can be to remove the file from the replication queue in zk and from 
 the memory of the replication source manager and replication sources. 
 A region server endpoint call from the hbck client to the region server can be 
 used to achieve this.
 Hbck can be configured with the following options:
 -softCheckReplicationWAL: Tries to open only the oldest WAL (the WAL 
 currently read by the replication source) from the replication queue. If 
 there is a position associated, it also seeks to that position and reads an 
 entry from there.
 -hardCheckReplicationWAL: Checks all WAL paths from replication queues by 
 reading them completely to make sure they are OK.
 -fixMissingReplicationWAL: Removes the WALs from replication queues which are 
 not present on HDFS.
 -fixCorruptedReplicationWAL: Removes the WALs from replication queues which 
 are corrupted (based on the findings from softCheck/hardCheck). The WALs are 
 also moved to a quarantine dir.
 -rollAndFixCorruptedReplicationWAL: If the current WAL is corrupted, it is 
 first rolled over and then dealt with in the same way as by the 
 -fixCorruptedReplicationWAL option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12133) Add FastLongHistogram for metric computation

2014-10-01 Thread Yi Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154445#comment-14154445
 ] 

Yi Deng commented on HBASE-12133:
-

[~stack] Thanks for commenting.

We're going to use it to compute a P99/P95 metric every minute. The whole 
design goal is a non-synchronized version of histogram collection.

About updateMin/updateMax, the implementation is the same as what the blog 
shows, and it looks good to me. People there are mainly concerned with whether 
it should be part of the JDK.

In practice, even if a lot of threads are calling the same method, all of them 
should finish without many retries, because a retry only happens if
1) the value needs to be changed (i.e. the new value is smaller than the 
current minimum). Actually, most of the threads will not try to change the 
value at all.
2) some other thread changes the current value (not just executes in parallel) 
AFTER the current value is read and BEFORE the new value is successfully 
compared-and-set. Since the code here is so fast, we can be optimistic that 
this only happens with a very small probability (but we don't assume it never 
happens).
3) since the value changes in a single direction (it keeps decreasing for 
updateMin), it is very likely that on the next retry the stored value is 
already smaller than or equal to the value to be set, and the loop exits. 

 Add FastLongHistogram for metric computation
 

 Key: HBASE-12133
 URL: https://issues.apache.org/jira/browse/HBASE-12133
 Project: HBase
  Issue Type: New Feature
  Components: metrics
Affects Versions: 0.98.8
Reporter: Yi Deng
Assignee: Yi Deng
Priority: Minor
  Labels: histogram, metrics
 Fix For: 0.98.8

 Attachments: 
 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch


 FastLongHistogram is a thread-safe class that estimates the distribution of 
 data and computes quantiles. It's useful for computing aggregated metrics like 
 P99/P95.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11879) Change TableInputFormatBase to take interface arguments

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154450#comment-14154450
 ] 

Hudson commented on HBASE-11879:


FAILURE: Integrated in HBase-TRUNK #5596 (See 
[https://builds.apache.org/job/HBase-TRUNK/5596/])
HBASE-11879 Change TableInputFormatBase to take interface arguments (Solomon 
Duskis) (stack: rev ff31691c84587c2fc938705e184f2a3238ef3070)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReader.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetaScanner.java


 Change TableInputFormatBase to take interface arguments
 ---

 Key: HBASE-11879
 URL: https://issues.apache.org/jira/browse/HBASE-11879
 Project: HBase
  Issue Type: Improvement
Reporter: Carter
Assignee: Solomon Duskis
 Fix For: 2.0.0, 0.99.1

 Attachments: 11879v4.txt, HBASE-11879_v2.patch, HBASE-11879_v3.patch, 
 HBASE_11879.patch, HBASE_11879_v1.patch


 As part of the ongoing interface abstraction work, I'm now investigating 
 {{TableInputFormatBase}}, which has two methods that break encapsulation:
 {code}
 protected HTable getHTable();
 protected void setHTable(HTable table);
 {code}
 While these are protected methods, the base @InterfaceAudience.Public is 
 abstract, meaning that it supports extension by user code.
 I propose deprecating these two methods and replacing them with these four, 
 once the Table interface is merged:
 {code}
 protected Table getTable();
 protected void setTable(Table table);
 protected RegionLocator getRegionLocator();
 protected void setRegionLocator(RegionLocator regionLocator);
 {code}
 Since users will frequently call {{setTable}} and {{setRegionLocator}} 
 together, it probably also makes sense to add the following convenience 
 method:
 {code}
 protected void setTableAndRegionLocator(Table table, RegionLocator 
 regionLocator);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12133) Add FastLongHistogram for metric computation

2014-10-01 Thread Yi Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154457#comment-14154457
 ] 

Yi Deng commented on HBASE-12133:
-

About the results from QA:

How can I find which core test was failing? On my local machine, all tests 
under hadoop-common passed. My class is not used by other code yet.

[~manukranthk] how do you find the Java warnings?

mvn findbugs:findbugs -X shows two warnings that are not related to my code.

 Add FastLongHistogram for metric computation
 

 Key: HBASE-12133
 URL: https://issues.apache.org/jira/browse/HBASE-12133
 Project: HBase
  Issue Type: New Feature
  Components: metrics
Affects Versions: 0.98.8
Reporter: Yi Deng
Assignee: Yi Deng
Priority: Minor
  Labels: histogram, metrics
 Fix For: 0.98.8

 Attachments: 
 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch


 FastLongHistogram is a thread-safe class that estimates the distribution of 
 data and computes quantiles. It's useful for computing aggregated metrics like 
 P99/P95.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12133) Add FastLongHistogram for metric computation

2014-10-01 Thread Manukranth Kolloju (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154458#comment-14154458
 ] 

Manukranth Kolloju commented on HBASE-12133:


[~stack], we have no histograms in our internal Hadoop, hence we are trying to 
add an implementation to the code. I also noticed that there are instances of 
the Yammer histogram being used. Were you talking about that?

 Add FastLongHistogram for metric computation
 

 Key: HBASE-12133
 URL: https://issues.apache.org/jira/browse/HBASE-12133
 Project: HBase
  Issue Type: New Feature
  Components: metrics
Affects Versions: 0.98.8
Reporter: Yi Deng
Assignee: Yi Deng
Priority: Minor
  Labels: histogram, metrics
 Fix For: 0.98.8

 Attachments: 
 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch


 FastLongHistogram is a thread-safe class that estimates the distribution of 
 data and computes quantiles. It's useful for computing aggregated metrics like 
 P99/P95.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12133) Add FastLongHistogram for metric computation

2014-10-01 Thread Manukranth Kolloju (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154461#comment-14154461
 ] 

Manukranth Kolloju commented on HBASE-12133:


[~daviddengcn], you can follow [~stack]'s comment on HBASE-12075

 Add FastLongHistogram for metric computation
 

 Key: HBASE-12133
 URL: https://issues.apache.org/jira/browse/HBASE-12133
 Project: HBase
  Issue Type: New Feature
  Components: metrics
Affects Versions: 0.98.8
Reporter: Yi Deng
Assignee: Yi Deng
Priority: Minor
  Labels: histogram, metrics
 Fix For: 0.98.8

 Attachments: 
 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch


 FastLongHistogram is a thread-safe class that estimates the distribution of 
 data and computes quantiles. It's useful for computing aggregated metrics like 
 P99/P95.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12130) HBASE-11980 calls hflush and hsync doing near double the syncing work

2014-10-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154462#comment-14154462
 ] 

Hadoop QA commented on HBASE-12130:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12672252/12130.txt
  against trunk revision .
  ATTACHMENT ID: 12672252

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestMultiParallel
  org.apache.hadoop.hbase.mapreduce.TestImportExport
  org.apache.hadoop.hbase.master.TestDistributedLogSplitting

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.util.TestBytes.testToStringBytesBinaryReversible(TestBytes.java:296)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11160//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11160//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11160//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11160//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11160//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11160//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11160//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11160//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11160//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11160//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11160//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11160//console

This message is automatically generated.

 HBASE-11980 calls hflush and hsync doing near double the syncing work
 -

 Key: HBASE-12130
 URL: https://issues.apache.org/jira/browse/HBASE-12130
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 0.99.1
Reporter: stack
Assignee: stack
Priority: Critical
 Attachments: 12130.txt, Screen Shot 2014-09-30 at 9.17.09 PM.png


 The HBASE-11980 change has us doing hflush and hsync every time we call sync 
 (noticed while profiling).  Fix.  Let me expose a config for calling one or 
 the other. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12049) Help for alter command is a bit confusing

2014-10-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154464#comment-14154464
 ] 

Hadoop QA commented on HBASE-12049:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12672256/HBASE-12049-1.patch
  against trunk revision .
  ATTACHMENT ID: 12672256

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 5 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11162//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11162//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11162//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11162//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11162//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11162//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11162//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11162//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11162//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11162//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11162//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11162//console

This message is automatically generated.

 Help for alter command is a bit confusing
 -

 Key: HBASE-12049
 URL: https://issues.apache.org/jira/browse/HBASE-12049
 Project: HBase
  Issue Type: Improvement
  Components: documentation, shell
Affects Versions: 0.98.5
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Trivial
  Labels: shell
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE-12049-1.patch, HBASE-12049.patch


 The help message shown for the alter command is a bit confusing.
 A part of the current help message:
 {code}
 Here is some help for this command:
 Alter a table. Depending on the HBase setting 
 (hbase.online.schema.update.enable),
 the table must be disabled or not to be altered (see help 'disable').
 You can add/modify/delete column families, as well as change table
 configuration. Column families work similarly to create; column family
 spec can either be a name string, or a dictionary with NAME attribute.
 Dictionaries are described on the main help command output.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12136) Race condition between client adding tableCF replication znode and server triggering TableCFsTracker

2014-10-01 Thread Virag Kothari (JIRA)
Virag Kothari created HBASE-12136:
-

 Summary: Race condition between client adding tableCF replication 
znode and  server triggering TableCFsTracker
 Key: HBASE-12136
 URL: https://issues.apache.org/jira/browse/HBASE-12136
 Project: HBase
  Issue Type: Bug
Reporter: Virag Kothari
Assignee: Virag Kothari


In ReplicationPeersZKImpl.addPeer(), there is a race between the client 
creating the tableCf znode and the server triggering the TableCFsTracker. If 
the server wins, it won't be able to read the data set on the tableCF znode, 
and replication will be misconfigured.
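The window can be seen in a toy single-threaded model of the two interleavings. This is a deliberately simplified stand-in, not actual ZooKeeper or HBase code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Toy model of the create/setData race described above. */
public class ZnodeRaceSketch {
    private final Map<String, byte[]> znodes = new ConcurrentHashMap<>();

    /** Racy order: the watcher can fire between create and setData
     *  and observe an empty znode. The read below simulates that. */
    public byte[] createThenSet(String path, byte[] data) {
        znodes.put(path, new byte[0]);            // create empty znode
        byte[] seenByWatcher = znodes.get(path);  // watcher reads here
        znodes.put(path, data);                   // data arrives too late
        return seenByWatcher;
    }

    /** Safe order: the data is supplied at creation time, so any read
     *  that sees the znode also sees its data. */
    public byte[] createWithData(String path, byte[] data) {
        znodes.put(path, data);
        return znodes.get(path);
    }
}
```

One conventional fix is to create the znode with its data in a single call, or to have the tracker re-read on data-change events as well as creation events.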



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12137) Alter table add cf doesn't do compression test

2014-10-01 Thread Virag Kothari (JIRA)
Virag Kothari created HBASE-12137:
-

 Summary: Alter table add cf doesn't do compression test
 Key: HBASE-12137
 URL: https://issues.apache.org/jira/browse/HBASE-12137
 Project: HBase
  Issue Type: Bug
Reporter: Virag Kothari
Assignee: Virag Kothari






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12138) preMasterInitialization() cp hook at wrong place

2014-10-01 Thread Virag Kothari (JIRA)
Virag Kothari created HBASE-12138:
-

 Summary: preMasterInitialization() cp hook at wrong place
 Key: HBASE-12138
 URL: https://issues.apache.org/jira/browse/HBASE-12138
 Project: HBase
  Issue Type: Bug
Reporter: Virag Kothari
Assignee: Virag Kothari








[jira] [Updated] (HBASE-12136) Race condition between client adding tableCF replication znode and server triggering TableCFsTracker

2014-10-01 Thread Virag Kothari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virag Kothari updated HBASE-12136:
--
Attachment: HBASE-12136.patch

 Race condition between client adding tableCF replication znode and  server 
 triggering TableCFsTracker
 -

 Key: HBASE-12136
 URL: https://issues.apache.org/jira/browse/HBASE-12136
 Project: HBase
  Issue Type: Bug
Reporter: Virag Kothari
Assignee: Virag Kothari
 Attachments: HBASE-12136.patch


 In ReplicationPeersZKImpl.addPeer(), there is a race between the client creating 
 the tableCF znode and the server triggering the TableCFsTracker. If the server 
 wins, it won't be able to read the data set on the tableCF znode, and 
 replication will be misconfigured.





[jira] [Updated] (HBASE-12137) Alter table add cf doesn't do compression test

2014-10-01 Thread Virag Kothari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virag Kothari updated HBASE-12137:
--
Attachment: HBASE-12137.patch

 Alter table add cf doesn't do compression test
 --

 Key: HBASE-12137
 URL: https://issues.apache.org/jira/browse/HBASE-12137
 Project: HBase
  Issue Type: Bug
Reporter: Virag Kothari
Assignee: Virag Kothari
 Attachments: HBASE-12137.patch








[jira] [Updated] (HBASE-12138) preMasterInitialization() cp hook at wrong place

2014-10-01 Thread Virag Kothari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virag Kothari updated HBASE-12138:
--
Attachment: HBASE-12138.patch

 preMasterInitialization() cp hook at wrong place
 

 Key: HBASE-12138
 URL: https://issues.apache.org/jira/browse/HBASE-12138
 Project: HBase
  Issue Type: Bug
Reporter: Virag Kothari
Assignee: Virag Kothari
 Attachments: HBASE-12138.patch








[jira] [Updated] (HBASE-12138) preMasterInitialization() cp hook at wrong place

2014-10-01 Thread Virag Kothari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virag Kothari updated HBASE-12138:
--
Component/s: master

 preMasterInitialization() cp hook at wrong place
 

 Key: HBASE-12138
 URL: https://issues.apache.org/jira/browse/HBASE-12138
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.98.6
Reporter: Virag Kothari
Assignee: Virag Kothari
 Attachments: HBASE-12138.patch








[jira] [Updated] (HBASE-12138) preMasterInitialization() cp hook at wrong place

2014-10-01 Thread Virag Kothari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virag Kothari updated HBASE-12138:
--
Affects Version/s: 0.98.6

 preMasterInitialization() cp hook at wrong place
 

 Key: HBASE-12138
 URL: https://issues.apache.org/jira/browse/HBASE-12138
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.98.6
Reporter: Virag Kothari
Assignee: Virag Kothari
 Attachments: HBASE-12138.patch








[jira] [Commented] (HBASE-12138) preMasterInitialization() cp hook at wrong place

2014-10-01 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154481#comment-14154481
 ] 

Anoop Sam John commented on HBASE-12138:


{code}
/**
   * Call before the master initialization is set to true.
   * {@link org.apache.hadoop.hbase.master.HMaster} process.
   */
  void preMasterInitialization(
      final ObserverContext<MasterCoprocessorEnvironment> ctx) throws IOException;
{code}
It is called just before the finalization of init, not before all the steps of 
the init.

 preMasterInitialization() cp hook at wrong place
 

 Key: HBASE-12138
 URL: https://issues.apache.org/jira/browse/HBASE-12138
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.98.6
Reporter: Virag Kothari
Assignee: Virag Kothari
 Attachments: HBASE-12138.patch








[jira] [Commented] (HBASE-12136) Race condition between client adding tableCF replication znode and server triggering TableCFsTracker

2014-10-01 Thread Virag Kothari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154490#comment-14154490
 ] 

Virag Kothari commented on HBASE-12136:
---

If useMulti is set to true, then the operations of adding a peer znode and its 
children, such as peer-state and tableCF, will be atomic, making the server 
change redundant. However, if useMulti is not configured, then we need a 
nodeCreated() event on the tracker so that even when the race occurs, 
nodeCreated() will make sure to refresh the tableCF config.

TestPerTableCFReplication.testPerTableCFReplication() should no longer be 
flaky.
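
The watch-before-read pattern behind this kind of fix can be sketched in plain Java. This is a toy in-memory stand-in for ZooKeeper, not the actual TableCFsTracker or ReplicationPeersZKImpl code; all names are illustrative:

```java
import java.util.concurrent.ConcurrentHashMap;

// Toy illustration of the watch-before-read pattern: the tracker registers
// its "watch" before the initial read, so a create that lands after a missed
// read still fires nodeCreated() and refreshes the cached tableCFs config.
public class TableCFsTrackerSketch {
    private final ConcurrentHashMap<String, String> znodes = new ConcurrentHashMap<>();
    private volatile Runnable nodeCreatedWatch;
    private volatile String tableCFs;  // tracker's cached config

    // Server side: set the watch first, then do the initial read (which may miss).
    public void startTracker(final String path) {
        nodeCreatedWatch = () -> tableCFs = znodes.get(path);
        String data = znodes.get(path);
        if (data != null) {
            tableCFs = data;
        }
    }

    // Client side: create the znode, then fire any registered watch (as ZK would).
    public void createZnode(String path, String data) {
        znodes.put(path, data);
        Runnable w = nodeCreatedWatch;
        if (w != null) {
            w.run();
        }
    }

    public String getTableCFs() {
        return tableCFs;
    }

    public static void main(String[] args) {
        TableCFsTrackerSketch zk = new TableCFsTrackerSketch();
        // Server "wins" the race: the tracker starts before the client creates the znode.
        zk.startTracker("/hbase/replication/peers/1/tableCFs");
        zk.createZnode("/hbase/replication/peers/1/tableCFs", "t1:cf1");
        System.out.println(zk.getTableCFs());  // the watch refreshed the cache: t1:cf1
    }
}
```

Without the watch, the tracker's one-shot read would leave tableCFs null forever when the server wins the race.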

 Race condition between client adding tableCF replication znode and  server 
 triggering TableCFsTracker
 -

 Key: HBASE-12136
 URL: https://issues.apache.org/jira/browse/HBASE-12136
 Project: HBase
  Issue Type: Bug
Reporter: Virag Kothari
Assignee: Virag Kothari
 Attachments: HBASE-12136.patch


 In ReplicationPeersZKImpl.addPeer(), there is a race between the client creating 
 the tableCF znode and the server triggering the TableCFsTracker. If the server 
 wins, it won't be able to read the data set on the tableCF znode, and 
 replication will be misconfigured.





[jira] [Created] (HBASE-12139) StochasticLoadBalancer doesn't work on large lightly loaded clusters

2014-10-01 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-12139:
-

 Summary: StochasticLoadBalancer doesn't work on large lightly 
loaded clusters
 Key: HBASE-12139
 URL: https://issues.apache.org/jira/browse/HBASE-12139
 Project: HBase
  Issue Type: Bug
  Components: Balancer, master
Affects Versions: 0.98.6.1, 0.99.0, 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Critical
 Fix For: 2.0.0, 0.98.7, 0.99.1


If you have a cluster with > 200 nodes and a small table (< 50 regions) the 
StochasticLoadBalancer doesn't work at all. It is not correctly scaling the 
required values.
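
The scaling problem can be illustrated with toy numbers. This is illustrative only, not the actual StochasticLoadBalancer cost functions: an unscaled skew cost looks negligible on a big, lightly loaded cluster, while a cost scaled against the worst case (all regions on one server) still reads as fully imbalanced:

```java
// Toy sketch: 200 servers, 50 regions. The raw stdev of regions-per-server
// is a small absolute number even in the worst placement, so an unscaled
// cost barely moves; scaling by the worst case keeps it in [0, 1].
public class BalancerCostSketch {

    // Population standard deviation of regions per server.
    static double stdev(int[] counts) {
        double mean = 0;
        for (int c : counts) mean += c;
        mean /= counts.length;
        double var = 0;
        for (int c : counts) var += (c - mean) * (c - mean);
        return Math.sqrt(var / counts.length);
    }

    // Skew cost scaled to [0, 1] by the most imbalanced placement possible.
    static double scaledSkewCost(int[] counts) {
        int total = 0;
        for (int c : counts) total += c;
        int[] worst = new int[counts.length];
        worst[0] = total;                       // all regions on one server
        double worstCost = stdev(worst);
        return worstCost == 0 ? 0 : stdev(counts) / worstCost;
    }

    public static void main(String[] args) {
        int[] counts = new int[200];            // 200 servers...
        counts[0] = 50;                         // ...all 50 regions on one of them
        System.out.println(stdev(counts));          // small raw number (~3.5)
        System.out.println(scaledSkewCost(counts)); // 1.0: clearly the worst case
    }
}
```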





[jira] [Commented] (HBASE-12138) preMasterInitialization() cp hook at wrong place

2014-10-01 Thread Virag Kothari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154500#comment-14154500
 ] 

Virag Kothari commented on HBASE-12138:
---

Hmm. I thought it was a bug. I wanted to use some hook before the master starts 
doing any heavy activity like assignment. Maybe I need to create something like 
preStartMaster().
Thanks for the quick look. Marking as invalid.

 preMasterInitialization() cp hook at wrong place
 

 Key: HBASE-12138
 URL: https://issues.apache.org/jira/browse/HBASE-12138
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.98.6
Reporter: Virag Kothari
Assignee: Virag Kothari
 Attachments: HBASE-12138.patch








[jira] [Resolved] (HBASE-12138) preMasterInitialization() cp hook at wrong place

2014-10-01 Thread Virag Kothari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virag Kothari resolved HBASE-12138.
---
Resolution: Invalid
  Assignee: (was: Virag Kothari)

 preMasterInitialization() cp hook at wrong place
 

 Key: HBASE-12138
 URL: https://issues.apache.org/jira/browse/HBASE-12138
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.98.6
Reporter: Virag Kothari
 Attachments: HBASE-12138.patch








[jira] [Updated] (HBASE-12137) Alter table add cf doesn't do compression test

2014-10-01 Thread Virag Kothari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virag Kothari updated HBASE-12137:
--
Component/s: master

 Alter table add cf doesn't do compression test
 --

 Key: HBASE-12137
 URL: https://issues.apache.org/jira/browse/HBASE-12137
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.98.6
Reporter: Virag Kothari
Assignee: Virag Kothari
 Attachments: HBASE-12137.patch








[jira] [Updated] (HBASE-12137) Alter table add cf doesn't do compression test

2014-10-01 Thread Virag Kothari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virag Kothari updated HBASE-12137:
--
Affects Version/s: 0.98.6

 Alter table add cf doesn't do compression test
 --

 Key: HBASE-12137
 URL: https://issues.apache.org/jira/browse/HBASE-12137
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.98.6
Reporter: Virag Kothari
Assignee: Virag Kothari
 Attachments: HBASE-12137.patch








[jira] [Commented] (HBASE-12139) StochasticLoadBalancer doesn't work on large lightly loaded clusters

2014-10-01 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154507#comment-14154507
 ] 

Elliott Clark commented on HBASE-12139:
---

https://reviews.facebook.net/D24285

 StochasticLoadBalancer doesn't work on large lightly loaded clusters
 

 Key: HBASE-12139
 URL: https://issues.apache.org/jira/browse/HBASE-12139
 Project: HBase
  Issue Type: Bug
  Components: Balancer, master
Affects Versions: 0.99.0, 2.0.0, 0.98.6.1
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Critical
 Fix For: 2.0.0, 0.98.7, 0.99.1


 If you have a cluster with > 200 nodes and a small table (< 50 regions) the 
 StochasticLoadBalancer doesn't work at all. It is not correctly scaling the 
 required values.





[jira] [Updated] (HBASE-12137) Alter table add cf doesn't do compression test

2014-10-01 Thread Virag Kothari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virag Kothari updated HBASE-12137:
--
Status: Patch Available  (was: Open)

 Alter table add cf doesn't do compression test
 --

 Key: HBASE-12137
 URL: https://issues.apache.org/jira/browse/HBASE-12137
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.98.6
Reporter: Virag Kothari
Assignee: Virag Kothari
 Attachments: HBASE-12137.patch








[jira] [Updated] (HBASE-12136) Race condition between client adding tableCF replication znode and server triggering TableCFsTracker

2014-10-01 Thread Virag Kothari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virag Kothari updated HBASE-12136:
--
Affects Version/s: 0.98.6
   Status: Patch Available  (was: Open)

 Race condition between client adding tableCF replication znode and  server 
 triggering TableCFsTracker
 -

 Key: HBASE-12136
 URL: https://issues.apache.org/jira/browse/HBASE-12136
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.98.6
Reporter: Virag Kothari
Assignee: Virag Kothari
 Attachments: HBASE-12136.patch


 In ReplicationPeersZKImpl.addPeer(), there is a race between the client creating 
 the tableCF znode and the server triggering the TableCFsTracker. If the server 
 wins, it won't be able to read the data set on the tableCF znode, and 
 replication will be misconfigured.





[jira] [Updated] (HBASE-12136) Race condition between client adding tableCF replication znode and server triggering TableCFsTracker

2014-10-01 Thread Virag Kothari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virag Kothari updated HBASE-12136:
--
Component/s: Replication

 Race condition between client adding tableCF replication znode and  server 
 triggering TableCFsTracker
 -

 Key: HBASE-12136
 URL: https://issues.apache.org/jira/browse/HBASE-12136
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.98.6
Reporter: Virag Kothari
Assignee: Virag Kothari
 Attachments: HBASE-12136.patch


 In ReplicationPeersZKImpl.addPeer(), there is a race between the client creating 
 the tableCF znode and the server triggering the TableCFsTracker. If the server 
 wins, it won't be able to read the data set on the tableCF znode, and 
 replication will be misconfigured.





[jira] [Commented] (HBASE-12134) publish_website.sh script is too optimistic

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154529#comment-14154529
 ] 

Hudson commented on HBASE-12134:


FAILURE: Integrated in HBase-TRUNK #5597 (See 
[https://builds.apache.org/job/HBase-TRUNK/5597/])
HBASE-12134 publish_hbase_website.sh script can delete the website accidentally 
(mstanleyjones: rev 456e9fa7a71515f447455ef89143295bae26ee52)
* dev-support/publish_hbase_website.sh


 publish_website.sh script is too optimistic
 ---

 Key: HBASE-12134
 URL: https://issues.apache.org/jira/browse/HBASE-12134
 Project: HBase
  Issue Type: Bug
  Components: scripts
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE-12134-v1.patch, HBASE-12134.patch


 The script doesn't check the status of the website build commands. This 
 means it will happily blow away the whole website after a failed mvn site 
 command.
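
A minimal sketch of the kind of guard such a fix adds. This is illustrative only, not the actual publish_hbase_website.sh; `true`/`false` stand in for real build steps like `mvn site`:

```shell
#!/usr/bin/env bash
# Illustrative guard: check each build step's exit status and stop before
# touching the live site when one fails.
run_step() {
  "$@" || { echo "step failed: $* -- leaving website untouched" >&2; return 1; }
}

run_step true && echo "step ok"
run_step false 2>/dev/null || echo "aborted publish"
```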





[jira] [Commented] (HBASE-12049) Help for alter command is a bit confusing

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154530#comment-14154530
 ] 

Hudson commented on HBASE-12049:


FAILURE: Integrated in HBase-TRUNK #5597 (See 
[https://builds.apache.org/job/HBase-TRUNK/5597/])
HBASE-12049 Help for alter command is a bit confusing (Ashish Singhi) 
(mstanleyjones: rev 231bc987611b4e2d9d7aa6e86a06891934b0e5b2)
* hbase-shell/src/main/ruby/shell/commands/alter.rb


 Help for alter command is a bit confusing
 -

 Key: HBASE-12049
 URL: https://issues.apache.org/jira/browse/HBASE-12049
 Project: HBase
  Issue Type: Improvement
  Components: documentation, shell
Affects Versions: 0.98.5
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Trivial
  Labels: shell
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE-12049-1.patch, HBASE-12049.patch


 The help message shown for alter command is a bit confusing.
 A part of current help message
 {code}
 Here is some help for this command:
 Alter a table. Depending on the HBase setting 
 (hbase.online.schema.update.enable),
 the table must be disabled or not to be altered (see help 'disable').
 You can add/modify/delete column families, as well as change table
 configuration. Column families work similarly to create; column family
 spec can either be a name string, or a dictionary with NAME attribute.
 Dictionaries are described on the main help command output.
 {code}





[jira] [Updated] (HBASE-12139) StochasticLoadBalancer doesn't work on large lightly loaded clusters

2014-10-01 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-12139:
--
Attachment: 
0001-HBASE-12139-StochasticLoadBalancer-doesn-t-work-on-l-0.98.patch

Here's the 0.98 patch.

 StochasticLoadBalancer doesn't work on large lightly loaded clusters
 

 Key: HBASE-12139
 URL: https://issues.apache.org/jira/browse/HBASE-12139
 Project: HBase
  Issue Type: Bug
  Components: Balancer, master
Affects Versions: 0.99.0, 2.0.0, 0.98.6.1
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Critical
 Fix For: 2.0.0, 0.98.7, 0.99.1

 Attachments: 
 0001-HBASE-12139-StochasticLoadBalancer-doesn-t-work-on-l-0.98.patch, 
 0001-HBASE-12139-StochasticLoadBalancer-doesn-t-work-on-l.patch


 If you have a cluster with > 200 nodes and a small table (< 50 regions) the 
 StochasticLoadBalancer doesn't work at all. It is not correctly scaling the 
 required values.





[jira] [Commented] (HBASE-12135) Website is broken

2014-10-01 Thread Misty Stanley-Jones (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154531#comment-14154531
 ] 

Misty Stanley-Jones commented on HBASE-12135:
-

Reversion complete. Website is back. 

 Website is broken
 -

 Key: HBASE-12135
 URL: https://issues.apache.org/jira/browse/HBASE-12135
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
Priority: Blocker
 Fix For: 2.0.0, 0.99.1


 Website is broken because of a flaw in the publish script (see HBASE-12134). 
 Currently reverting and the script has also already been fixed.





[jira] [Resolved] (HBASE-12135) Website is broken

2014-10-01 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones resolved HBASE-12135.
-
Resolution: Fixed

 Website is broken
 -

 Key: HBASE-12135
 URL: https://issues.apache.org/jira/browse/HBASE-12135
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
Priority: Blocker
 Fix For: 2.0.0, 0.99.1


 Website is broken because of a flaw in the publish script (see HBASE-12134). 
 Currently reverting and the script has also already been fixed.





[jira] [Updated] (HBASE-12139) StochasticLoadBalancer doesn't work on large lightly loaded clusters

2014-10-01 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-12139:
--
Attachment: 0001-HBASE-12139-StochasticLoadBalancer-doesn-t-work-on-l.patch

Here's the master/branch-1 patch. Master had some of the fix but not all of 
it.

 StochasticLoadBalancer doesn't work on large lightly loaded clusters
 

 Key: HBASE-12139
 URL: https://issues.apache.org/jira/browse/HBASE-12139
 Project: HBase
  Issue Type: Bug
  Components: Balancer, master
Affects Versions: 0.99.0, 2.0.0, 0.98.6.1
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Critical
 Fix For: 2.0.0, 0.98.7, 0.99.1

 Attachments: 
 0001-HBASE-12139-StochasticLoadBalancer-doesn-t-work-on-l-0.98.patch, 
 0001-HBASE-12139-StochasticLoadBalancer-doesn-t-work-on-l.patch


 If you have a cluster with > 200 nodes and a small table (< 50 regions) the 
 StochasticLoadBalancer doesn't work at all. It is not correctly scaling the 
 required values.





[jira] [Updated] (HBASE-12139) StochasticLoadBalancer doesn't work on large lightly loaded clusters

2014-10-01 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-12139:
--
Status: Patch Available  (was: Open)

 StochasticLoadBalancer doesn't work on large lightly loaded clusters
 

 Key: HBASE-12139
 URL: https://issues.apache.org/jira/browse/HBASE-12139
 Project: HBase
  Issue Type: Bug
  Components: Balancer, master
Affects Versions: 0.98.6.1, 0.99.0, 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Critical
 Fix For: 2.0.0, 0.98.7, 0.99.1

 Attachments: 
 0001-HBASE-12139-StochasticLoadBalancer-doesn-t-work-on-l-0.98.patch, 
 0001-HBASE-12139-StochasticLoadBalancer-doesn-t-work-on-l.patch


 If you have a cluster with > 200 nodes and a small table (< 50 regions) the 
 StochasticLoadBalancer doesn't work at all. It is not correctly scaling the 
 required values.





[jira] [Commented] (HBASE-12124) Closed region could stay closed if master stops at bad time

2014-10-01 Thread Virag Kothari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154537#comment-14154537
 ] 

Virag Kothari commented on HBASE-12124:
---

Thanks [~jxiang]

 Closed region could stay closed if master stops at bad time
 ---

 Key: HBASE-12124
 URL: https://issues.apache.org/jira/browse/HBASE-12124
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.99.1

 Attachments: hbase-12124.patch


 This applies to RPC-based region assignment only.





[jira] [Commented] (HBASE-12112) Avoid KeyValueUtil#ensureKeyValue some more simple cases

2014-10-01 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154563#comment-14154563
 ] 

Anoop Sam John commented on HBASE-12112:


TestAsyncProcess passing locally

1 javadoc warn is related to this patch which I will commit on fix.
The findbugs warns seems not related
Will commit to 0.99+

 Avoid KeyValueUtil#ensureKeyValue some more simple cases
 

 Key: HBASE-12112
 URL: https://issues.apache.org/jira/browse/HBASE-12112
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE-12112.patch, HBASE-12112_V2.patch, 
 HBASE-12112_V2.patch, HBASE-12112_V4.patch, HBASE-12112_V4.patch


 This includes fixes for:
 - Replace KeyValue#heapSize() with CellUtil#estimatedHeapSizeOf(Cell)
 - Printing the key portion of a cell (rk+cf+q+ts+type). These are in 
 exception messages
 - HFilePrettyPrinter - avoiding ensureKeyValue() calls and calls to 
 cell#getXxx() which involve byte copying. This is not a hot area; still, we 
 can avoid as much usage of deprecated methods as possible in core code. I 
 believe these byte-copying methods are used in many other parts, and later we 
 can try fixing those in order of area importance
 - Creating CellUtil#createKeyOnlyCell and using that in KeyOnlyFilter





[jira] [Comment Edited] (HBASE-12112) Avoid KeyValueUtil#ensureKeyValue some more simple cases

2014-10-01 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154563#comment-14154563
 ] 

Anoop Sam John edited comment on HBASE-12112 at 10/1/14 8:33 AM:
-

TestAsyncProcess passing locally

1 javadoc warn is related to this patch which I will fix on commit.
The findbugs warns seems not related
Will commit to 0.99+


was (Author: anoop.hbase):
TestAsyncProcess passing locally

1 javadoc warn is related to this patch which I will commit on fix.
The findbugs warns seems not related
Will commit to 0.99+

 Avoid KeyValueUtil#ensureKeyValue some more simple cases
 

 Key: HBASE-12112
 URL: https://issues.apache.org/jira/browse/HBASE-12112
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE-12112.patch, HBASE-12112_V2.patch, 
 HBASE-12112_V2.patch, HBASE-12112_V4.patch, HBASE-12112_V4.patch


 This includes fixes for:
 - Replace KeyValue#heapSize() with CellUtil#estimatedHeapSizeOf(Cell)
 - Printing the key portion of a cell (rk+cf+q+ts+type). These are in 
 exception messages
 - HFilePrettyPrinter - avoiding ensureKeyValue() calls and calls to 
 cell#getXxx() which involve byte copying. This is not a hot area; still, we 
 can avoid as much usage of deprecated methods as possible in core code. I 
 believe these byte-copying methods are used in many other parts, and later we 
 can try fixing those in order of area importance
 - Creating CellUtil#createKeyOnlyCell and using that in KeyOnlyFilter





[jira] [Commented] (HBASE-12123) Failed assertion in BucketCache after 11331

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154574#comment-14154574
 ] 

Hudson commented on HBASE-12123:


FAILURE: Integrated in HBase-1.0 #252 (See 
[https://builds.apache.org/job/HBase-1.0/252/])
HBASE-12123 Failed assertion in BucketCache after 11331 (ndimiduk: rev 
b30b34abe728099e4f95a5df810a63898bd71aec)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java


 Failed assertion in BucketCache after 11331
 ---

 Key: HBASE-12123
 URL: https://issues.apache.org/jira/browse/HBASE-12123
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Enis Soztutar
Assignee: Nick Dimiduk
 Fix For: 2.0.0, 0.98.7, 0.99.1

 Attachments: 12123.patch


 As reported by [~enis]
 We have seen this in one of the test runs: 
 {code}
 2014-09-26 05:31:19,788 WARN  [main-BucketCacheWriter-2] bucket.BucketCache: 
 Failed doing drain
 java.lang.AssertionError
   at 
 org.apache.hadoop.hbase.io.hfile.bucket.BucketCache$RAMQueueEntry.writeToCache(BucketCache.java:1239)
   at 
 org.apache.hadoop.hbase.io.hfile.bucket.BucketCache$WriterThread.doDrain(BucketCache.java:773)
   at 
 org.apache.hadoop.hbase.io.hfile.bucket.BucketCache$WriterThread.run(BucketCache.java:731)
   at java.lang.Thread.run(Thread.java:745)
 2014-09-26 05:31:19,925 INFO  [main-BucketCacheWriter-2] bucket.BucketCache: 
 main-BucketCacheWriter-2 exiting, cacheEnabled=true
 2014-09-26 05:31:19,838 WARN  [main-BucketCacheWriter-1] bucket.BucketCache: 
 Failed doing drain
 java.lang.AssertionError
   at 
 org.apache.hadoop.hbase.io.hfile.bucket.BucketCache$RAMQueueEntry.writeToCache(BucketCache.java:1239)
   at 
 org.apache.hadoop.hbase.io.hfile.bucket.BucketCache$WriterThread.doDrain(BucketCache.java:773)
   at 
 org.apache.hadoop.hbase.io.hfile.bucket.BucketCache$WriterThread.run(BucketCache.java:731)
   at java.lang.Thread.run(Thread.java:745)
 2014-09-26 05:31:19,791 WARN  [main-BucketCacheWriter-0] bucket.BucketCache: 
 Failed doing drain
 java.lang.AssertionError
   at 
 org.apache.hadoop.hbase.io.hfile.bucket.BucketCache$RAMQueueEntry.writeToCache(BucketCache.java:1239)
   at 
 org.apache.hadoop.hbase.io.hfile.bucket.BucketCache$WriterThread.doDrain(BucketCache.java:773)
   at 
 org.apache.hadoop.hbase.io.hfile.bucket.BucketCache$WriterThread.run(BucketCache.java:731)
   at java.lang.Thread.run(Thread.java:745)
 2014-09-26 05:31:19,926 INFO  [main-BucketCacheWriter-0] bucket.BucketCache: 
 main-BucketCacheWriter-0 exiting, cacheEnabled=true
 2014-09-26 05:31:19,926 INFO  [main-BucketCacheWriter-1] bucket.BucketCache: 
 main-BucketCacheWriter-1 exiting, cacheEnabled=true
 {code}
 We are still running with assertions on in tests, and this block is failing 
 the assertion. Seems important: 
 {code}
 if (data instanceof HFileBlock) {
   ByteBuffer sliceBuf = ((HFileBlock) data).getBufferReadOnlyWithHeader();
   sliceBuf.rewind();
   assert len == sliceBuf.limit() + HFileBlock.EXTRA_SERIALIZATION_SPACE;
 {code}





[jira] [Commented] (HBASE-12038) Replace internal uses of signatures with byte[] and String tableNames to use the TableName equivalents.

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154575#comment-14154575
 ] 

Hudson commented on HBASE-12038:


FAILURE: Integrated in HBase-1.0 #252 (See 
[https://builds.apache.org/job/HBase-1.0/252/])
HBASE-12038 Replace internal uses of signatures with byte[] and String 
tableNames to use the TableName equivalents (Solomon Duskis) (stack: rev 
eb361fc33d333b257c50ddbbcf4de8656bebadd0)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableOutputFormat.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSecureLoadIncrementalHFiles.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterWrapper.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestProcessBasedCluster.java
* 
hbase-examples/src/test/java/org/apache/hadoop/hbase/coprocessor/example/TestRowCountEndpoint.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTimeRangeMapRed.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/TestAcidGuarantees.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestJoinedScanners.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSecureLoadIncrementalHFilesSplitRecovery.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/tool/Canary.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityClient.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestCellACLWithMultipleVersions.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestScanEarlyTermination.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java
* 
hbase-examples/src/test/java/org/apache/hadoop/hbase/coprocessor/example/TestBulkDeleteProtocol.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRestartCluster.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithACL.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestRegionSplitter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterOperationsForRegionReplicas.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterWithScanLimits.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/TestMultiVersions.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMultiSlaveReplication.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationBase.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFSErrorsExposed.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlacement.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTablePool.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/HRegionPartitioner.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestEncryptionKeyRotation.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduceBase.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSmallTests.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestClassLoading.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormat.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSyncUpTool.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMasterReplication.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestPerTableCFReplication.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableFactory.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlClient.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestSecureExportSnapshot.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/SecureTestUtil.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithDefaultVisLabelService.java

[jira] [Updated] (HBASE-12112) Avoid KeyValueUtil#ensureKeyValue some more simple cases

2014-10-01 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-12112:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the review Stack

 Avoid KeyValueUtil#ensureKeyValue some more simple cases
 

 Key: HBASE-12112
 URL: https://issues.apache.org/jira/browse/HBASE-12112
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE-12112.patch, HBASE-12112_V2.patch, 
 HBASE-12112_V2.patch, HBASE-12112_V4.patch, HBASE-12112_V4.patch


 This includes fixes for:
 - Replacing KeyValue#heapSize() with CellUtil#estimatedHeapSizeOf(Cell)
 - Printing the key portion of a cell (rk+cf+q+ts+type); these appear in 
 exception messages
 - HFilePrettyPrinter: avoiding ensureKeyValue() calls and calls to 
 cell#getXxx() which involve byte copying. This is not a hot area, but we can 
 still avoid as much usage of deprecated methods as possible in core code. I 
 believe these byte-copying methods are used in many other parts, and later we 
 can fix those in order of importance
 - Creating CellUtil#createKeyOnlyCell and using it in KeyOnlyFilter
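The first two items amount to working against the Cell interface rather than materializing KeyValue copies. A minimal sketch of the idea follows; MiniCell and ArrayBackedCell are hypothetical stand-ins for illustration, not the real HBase Cell/CellUtil API:

```java
// Hypothetical stand-ins for HBase's Cell abstraction: consumers read key
// bytes through (array, offset, length) accessors, so no
// ensureKeyValue()-style copy is ever needed.
class CellSketch {
    interface MiniCell {
        byte[] rowArray();
        int rowOffset();
        int rowLength();
    }

    static final class ArrayBackedCell implements MiniCell {
        private final byte[] buf;
        private final int off, len;
        ArrayBackedCell(byte[] buf, int off, int len) {
            this.buf = buf;
            this.off = off;
            this.len = len;
        }
        public byte[] rowArray() { return buf; }
        public int rowOffset() { return off; }
        public int rowLength() { return len; }
    }

    // Compares two cells' row keys without copying either one: works for any
    // MiniCell implementation, whether or not it shares a backing buffer.
    static boolean rowsEqual(MiniCell a, MiniCell b) {
        if (a.rowLength() != b.rowLength()) {
            return false;
        }
        for (int i = 0; i < a.rowLength(); i++) {
            if (a.rowArray()[a.rowOffset() + i] != b.rowArray()[b.rowOffset() + i]) {
                return false;
            }
        }
        return true;
    }
}
```

Two cells pointing into the same backing block at different offsets can be compared directly; converting each to a concrete KeyValue first would have forced a byte copy per cell.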



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12049) Help for alter command is a bit confusing

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154594#comment-14154594
 ] 

Hudson commented on HBASE-12049:


FAILURE: Integrated in HBase-1.0 #253 (See 
[https://builds.apache.org/job/HBase-1.0/253/])
HBASE-12049 Help for alter command is a bit confusing (Ashish Singhi) 
(mstanleyjones: rev 50a3019255a6b019ae864b549d047f819cec0ef9)
* hbase-shell/src/main/ruby/shell/commands/alter.rb


 Help for alter command is a bit confusing
 -

 Key: HBASE-12049
 URL: https://issues.apache.org/jira/browse/HBASE-12049
 Project: HBase
  Issue Type: Improvement
  Components: documentation, shell
Affects Versions: 0.98.5
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Trivial
  Labels: shell
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE-12049-1.patch, HBASE-12049.patch


 The help message shown for alter command is a bit confusing.
 A part of current help message
 {code}
 Here is some help for this command:
 Alter a table. Depending on the HBase setting 
 (hbase.online.schema.update.enable),
 the table must be disabled or not to be altered (see help 'disable').
 You can add/modify/delete column families, as well as change table
 configuration. Column families work similarly to create; column family
 spec can either be a name string, or a dictionary with NAME attribute.
 Dictionaries are described on the main help command output.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11879) Change TableInputFormatBase to take interface arguments

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154591#comment-14154591
 ] 

Hudson commented on HBASE-11879:


FAILURE: Integrated in HBase-1.0 #253 (See 
[https://builds.apache.org/job/HBase-1.0/253/])
HBASE-11879 Change TableInputFormatBase to take interface arguments (Solomon 
Duskis) (stack: rev 7b64e7d7df757c3d58b564e01245109cb8b9f660)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReader.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetaScanner.java


 Change TableInputFormatBase to take interface arguments
 ---

 Key: HBASE-11879
 URL: https://issues.apache.org/jira/browse/HBASE-11879
 Project: HBase
  Issue Type: Improvement
Reporter: Carter
Assignee: Solomon Duskis
 Fix For: 2.0.0, 0.99.1

 Attachments: 11879v4.txt, HBASE-11879_v2.patch, HBASE-11879_v3.patch, 
 HBASE_11879.patch, HBASE_11879_v1.patch


 As part of the ongoing interface abstraction work, I'm now investigating 
 {{TableInputFormatBase}}, which has two methods that break encapsulation:
 {code}
 protected HTable getHTable();
 protected void setHTable(HTable table);
 {code}
 While these are protected methods, the base class is @InterfaceAudience.Public 
 and abstract, meaning that it supports extension by user code.
 I propose deprecating these two methods and replacing them with these four, 
 once the Table interface is merged:
 {code}
 protected Table getTable();
 protected void setTable(Table table);
 protected RegionLocator getRegionLocator();
 protected void setRegionLocator(RegionLocator regionLocator);
 {code}
 Since users will frequently call {{setTable}} and {{setRegionLocator}} 
 together, it probably also makes sense to add the following convenience 
 method:
 {code}
 protected void setTableAndRegionLocator(Table table, RegionLocator 
 regionLocator);
 {code}
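The proposed migration can be sketched as follows; Table and RegionLocator are reduced to empty marker interfaces here, and TableInputFormatBase is a toy stand-in rather than the real mapreduce class:

```java
// Reduced sketch of the proposed accessors: interface-typed getters/setters
// plus the convenience setter, since callers usually set both together.
class InputFormatSketch {
    interface Table { }
    interface RegionLocator { }

    static class TableInputFormatBase {
        private Table table;
        private RegionLocator regionLocator;

        protected Table getTable() { return table; }
        protected void setTable(Table table) { this.table = table; }
        protected RegionLocator getRegionLocator() { return regionLocator; }
        protected void setRegionLocator(RegionLocator regionLocator) {
            this.regionLocator = regionLocator;
        }

        // Convenience method from the proposal: one call configures both
        // collaborators, keeping subclass setup code to a single line.
        protected void setTableAndRegionLocator(Table table, RegionLocator regionLocator) {
            setTable(table);
            setRegionLocator(regionLocator);
        }
    }
}
```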



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12134) publish_website.sh script is too optimistic

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154592#comment-14154592
 ] 

Hudson commented on HBASE-12134:


FAILURE: Integrated in HBase-1.0 #253 (See 
[https://builds.apache.org/job/HBase-1.0/253/])
HBASE-12134 publish_hbase_website.sh script can delete the website accidentally 
(mstanleyjones: rev 3a4be7f2ace0f38dc5c4967a586141b86d703eb3)
* dev-support/publish_hbase_website.sh


 publish_website.sh script is too optimistic
 ---

 Key: HBASE-12134
 URL: https://issues.apache.org/jira/browse/HBASE-12134
 Project: HBase
  Issue Type: Bug
  Components: scripts
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE-12134-v1.patch, HBASE-12134.patch


 The script doesn't check the status of the website build commands. This means 
 it will happily blow away the whole website even when the mvn site command 
 failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12112) Avoid KeyValueUtil#ensureKeyValue some more simple cases

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154593#comment-14154593
 ] 

Hudson commented on HBASE-12112:


FAILURE: Integrated in HBase-1.0 #253 (See 
[https://builds.apache.org/job/HBase-1.0/253/])
HBASE-12112 Avoid KeyValueUtil#ensureKeyValue some more simple cases. 
(anoopsamjohn: rev ebe74abda9358c9a5dad2ef8c1b800e7dc4aaf26)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV3.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientSmallScanner.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileManager.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientSmallReversedScanner.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Result.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Increment.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultStoreFileManager.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/CellKey.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeStoreFileManager.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java


 Avoid KeyValueUtil#ensureKeyValue some more simple cases
 

 Key: HBASE-12112
 URL: https://issues.apache.org/jira/browse/HBASE-12112
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE-12112.patch, HBASE-12112_V2.patch, 
 HBASE-12112_V2.patch, HBASE-12112_V4.patch, HBASE-12112_V4.patch


 This includes fixes for:
 - Replacing KeyValue#heapSize() with CellUtil#estimatedHeapSizeOf(Cell)
 - Printing the key portion of a cell (rk+cf+q+ts+type); these appear in 
 exception messages
 - HFilePrettyPrinter: avoiding ensureKeyValue() calls and calls to 
 cell#getXxx() which involve byte copying. This is not a hot area, but we can 
 still avoid as much usage of deprecated methods as possible in core code. I 
 believe these byte-copying methods are used in many other parts, and later we 
 can fix those in order of importance
 - Creating CellUtil#createKeyOnlyCell and using it in KeyOnlyFilter



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12136) Race condition between client adding tableCF replication znode and server triggering TableCFsTracker

2014-10-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154607#comment-14154607
 ] 

Hadoop QA commented on HBASE-12136:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12672271/HBASE-12136.patch
  against trunk revision .
  ATTACHMENT ID: 12672271

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 5 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.master.TestDistributedLogSplitting

 {color:red}-1 core zombie tests{color}.  There are 7 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesSplitRecovery.testSplitWhileBulkLoadPhase(TestLoadIncrementalHFilesSplitRecovery.java:339)
at 
org.apache.hadoop.hbase.replication.regionserver.TestReplicationHLogReaderManager.test(TestReplicationHLogReaderManager.java:180)
at 
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.testSimpleLoad(TestLoadIncrementalHFiles.java:100)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11164//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11164//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11164//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11164//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11164//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11164//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11164//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11164//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11164//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11164//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11164//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11164//console

This message is automatically generated.

 Race condition between client adding tableCF replication znode and  server 
 triggering TableCFsTracker
 -

 Key: HBASE-12136
 URL: https://issues.apache.org/jira/browse/HBASE-12136
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.98.6
Reporter: Virag Kothari
Assignee: Virag Kothari
 Attachments: HBASE-12136.patch


 In ReplicationPeersZKImpl.addPeer(), there is a race between the client 
 creating the tableCf znode and the server triggering the TableCFsTracker. If 
 the server wins, it won't be able to read the data set on the tableCF znode, 
 and replication will be misconfigured.
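The window can be illustrated schematically. This is a toy model, not the ReplicationPeersZKImpl code: a two-step create-then-set publish lets a watcher that fires on node creation observe the node before its data exists, whereas publishing the data atomically with the creation leaves nothing to race with.

```java
// Toy model of the reported race. A String[] cell stands in for a znode;
// null means "node exists but data not set yet". The boolean simulates the
// interleaving: true = the server-side tracker reads between the two steps.
class TableCfRaceSketch {
    // Racy publish: create the empty znode first, set tableCF data second.
    static String racyPublish(boolean trackerReadsEarly) {
        String[] znode = new String[1];     // step 1: znode created, no data
        if (trackerReadsEarly) {
            return znode[0];                // tracker sees null -> misconfigured
        }
        znode[0] = "tableA:cf1";            // step 2: client sets the data
        return znode[0];
    }

    // Safe publish: the data travels with the creation, so any read that can
    // see the node at all also sees its data.
    static String safePublish(boolean trackerReadsEarly) {
        String znode = "tableA:cf1";        // created with data in one step
        return znode;
    }
}
```

ZooKeeper's create() call accepts the node's initial data, so the atomic variant maps directly onto the real API.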



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12139) StochasticLoadBalancer doesn't work on large lightly loaded clusters

2014-10-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154610#comment-14154610
 ] 

Hadoop QA commented on HBASE-12139:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12672281/0001-HBASE-12139-StochasticLoadBalancer-doesn-t-work-on-l.patch
  against trunk revision .
  ATTACHMENT ID: 12672281

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 7 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 5 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.master.TestDistributedLogSplitting

 {color:red}-1 core zombie tests{color}.  There are 2 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.testSimpleLoad(TestLoadIncrementalHFiles.java:100)
at 
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesSplitRecovery.testSplitWhileBulkLoadPhase(TestLoadIncrementalHFilesSplitRecovery.java:339)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11165//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11165//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11165//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11165//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11165//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11165//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11165//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11165//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11165//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11165//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11165//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11165//console

This message is automatically generated.

 StochasticLoadBalancer doesn't work on large lightly loaded clusters
 

 Key: HBASE-12139
 URL: https://issues.apache.org/jira/browse/HBASE-12139
 Project: HBase
  Issue Type: Bug
  Components: Balancer, master
Affects Versions: 0.99.0, 2.0.0, 0.98.6.1
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Critical
 Fix For: 2.0.0, 0.98.7, 0.99.1

 Attachments: 
 0001-HBASE-12139-StochasticLoadBalancer-doesn-t-work-on-l-0.98.patch, 
 0001-HBASE-12139-StochasticLoadBalancer-doesn-t-work-on-l.patch


 If you have a cluster with > 200 nodes and a small table (< 50 regions), the 
 StochasticLoadBalancer doesn't work at all. It is not correctly scaling the 
 required values.
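One general way to keep a balancer cost meaningful across cluster sizes is to normalize the observed imbalance by the worst possible imbalance for that cluster, so the value stays in [0, 1] whether there are 5 servers or 500. The sketch below illustrates that idea only; it is not the actual StochasticLoadBalancer cost function or its fix.

```java
// Illustrative normalization of a region-count cost: divide the observed
// deviation from an even spread by the worst-case deviation (all regions on
// one server), so small absolute imbalances still register on big clusters.
class BalancerCostSketch {
    static double scaledCost(int[] regionsPerServer) {
        int total = 0, max = 0;
        for (int r : regionsPerServer) {
            total += r;
            max = Math.max(max, r);
        }
        if (total == 0) {
            return 0.0;                         // nothing to balance
        }
        double mean = (double) total / regionsPerServer.length;
        double observed = max - mean;           // fullest server vs. even spread
        double worstCase = total - mean;        // everything piled on one server
        return worstCase == 0 ? 0.0 : observed / worstCase;
    }
}
```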



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-10200) Better error message when HttpServer fails to start due to java.net.BindException

2014-10-01 Thread Kiran Kumar M R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar M R updated HBASE-10200:

Attachment: HBASE-10020-V3.patch

 Better error message when HttpServer fails to start due to 
 java.net.BindException
 -

 Key: HBASE-10200
 URL: https://issues.apache.org/jira/browse/HBASE-10200
 Project: HBase
  Issue Type: Task
Affects Versions: 2.0.0
Reporter: Ted Yu
Assignee: Kiran Kumar M R
Priority: Minor
  Labels: beginner
 Fix For: 2.0.0, 0.99.1

 Attachments: 10200.out, HBASE-10020-V2.patch, HBASE-10020-V3.patch, 
 HBASE-10020.patch, HBASE-10020.patch


 Starting HBase using Hoya, I saw the following in log:
 {code}
 2013-12-17 21:49:06,758 INFO  [master:hor12n19:42587] http.HttpServer: 
 HttpServer.start() threw a non Bind IOException
 java.net.BindException: Port in use: hor12n14.gq1.ygridcore.net:12432
 at org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:742)
 at org.apache.hadoop.http.HttpServer.start(HttpServer.java:686)
 at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:586)
 at java.lang.Thread.run(Thread.java:722)
 Caused by: java.net.BindException: Cannot assign requested address
 at sun.nio.ch.Net.bind0(Native Method)
 at sun.nio.ch.Net.bind(Net.java:344)
 at sun.nio.ch.Net.bind(Net.java:336)
 at 
 sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:199)
 at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
 at 
 org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
 at org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:738)
 {code}
 This was due to hbase.master.info.bindAddress specifying a static address, 
 while Hoya allocates the master dynamically.
 A better error message should be provided: when bindAddress points to a host 
 other than the local host, the message should remind the user to remove or 
 adjust the hbase.master.info.bindAddress config param in hbase-site.xml.
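A check of the kind the improved error path could make is sketched below. The helper is an assumption for illustration, not existing HBase code, but it uses only standard java.net calls: an address is bindable locally if it is the wildcard, a loopback address, or assigned to one of this machine's interfaces.

```java
import java.net.InetAddress;
import java.net.NetworkInterface;

// Assumed helper: detect up front whether the configured bind address
// belongs to this machine, so the error message can point the user at
// hbase.master.info.bindAddress instead of surfacing a bare BindException.
class BindAddressCheck {
    static boolean isLocalAddress(String host) {
        try {
            InetAddress addr = InetAddress.getByName(host);
            // Wildcard and loopback are always bindable; anything else must
            // be assigned to one of this machine's network interfaces.
            return addr.isAnyLocalAddress()
                || addr.isLoopbackAddress()
                || NetworkInterface.getByInetAddress(addr) != null;
        } catch (Exception e) {
            return false;   // unresolvable names are certainly not bindable
        }
    }
}
```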



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10200) Better error message when HttpServer fails to start due to java.net.BindException

2014-10-01 Thread Kiran Kumar M R (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154632#comment-14154632
 ] 

Kiran Kumar M R commented on HBASE-10200:
-


bq. {color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100
Added a new patch addressing the line length errors.

bq. {color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.
The Findbugs warnings are not related to the classes changed in the patch.

The failed unit tests pass locally.

 Better error message when HttpServer fails to start due to 
 java.net.BindException
 -

 Key: HBASE-10200
 URL: https://issues.apache.org/jira/browse/HBASE-10200
 Project: HBase
  Issue Type: Task
Affects Versions: 2.0.0
Reporter: Ted Yu
Assignee: Kiran Kumar M R
Priority: Minor
  Labels: beginner
 Fix For: 2.0.0, 0.99.1

 Attachments: 10200.out, HBASE-10020-V2.patch, HBASE-10020-V3.patch, 
 HBASE-10020.patch, HBASE-10020.patch


 Starting HBase using Hoya, I saw the following in log:
 {code}
 2013-12-17 21:49:06,758 INFO  [master:hor12n19:42587] http.HttpServer: 
 HttpServer.start() threw a non Bind IOException
 java.net.BindException: Port in use: hor12n14.gq1.ygridcore.net:12432
 at org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:742)
 at org.apache.hadoop.http.HttpServer.start(HttpServer.java:686)
 at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:586)
 at java.lang.Thread.run(Thread.java:722)
 Caused by: java.net.BindException: Cannot assign requested address
 at sun.nio.ch.Net.bind0(Native Method)
 at sun.nio.ch.Net.bind(Net.java:344)
 at sun.nio.ch.Net.bind(Net.java:336)
 at 
 sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:199)
 at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
 at 
 org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
 at org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:738)
 {code}
 This was due to hbase.master.info.bindAddress specifying a static address, 
 while Hoya allocates the master dynamically.
 A better error message should be provided: when bindAddress points to a host 
 other than the local host, the message should remind the user to remove or 
 adjust the hbase.master.info.bindAddress config param in hbase-site.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12112) Avoid KeyValueUtil#ensureKeyValue some more simple cases

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154638#comment-14154638
 ] 

Hudson commented on HBASE-12112:


FAILURE: Integrated in HBase-TRUNK #5598 (See 
[https://builds.apache.org/job/HBase-TRUNK/5598/])
HBASE-12112 Avoid KeyValueUtil#ensureKeyValue some more simple cases. 
(anoopsamjohn: rev 4fac4c1ba6fa1a0b30a798d3d1f2a8f803a5c531)
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeStoreFileManager.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV3.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Increment.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/CellKey.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientSmallReversedScanner.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultStoreFileManager.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileManager.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientSmallScanner.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Result.java


 Avoid KeyValueUtil#ensureKeyValue some more simple cases
 

 Key: HBASE-12112
 URL: https://issues.apache.org/jira/browse/HBASE-12112
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE-12112.patch, HBASE-12112_V2.patch, 
 HBASE-12112_V2.patch, HBASE-12112_V4.patch, HBASE-12112_V4.patch


 This includes fixes for:
 - Replacing KeyValue#heapSize() with CellUtil#estimatedHeapSizeOf(Cell)
 - Printing the key portion of a cell (rk+cf+q+ts+type); these appear in 
 exception messages
 - HFilePrettyPrinter: avoiding ensureKeyValue() calls and calls to 
 cell#getXxx() which involve byte copying. This is not a hot area, but we can 
 still avoid as much usage of deprecated methods as possible in core code. I 
 believe these byte-copying methods are used in many other parts, and later we 
 can fix those in order of importance
 - Creating CellUtil#createKeyOnlyCell and using it in KeyOnlyFilter



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12137) Alter table add cf doesn't do compression test

2014-10-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154699#comment-14154699
 ] 

Hadoop QA commented on HBASE-12137:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12672272/HBASE-12137.patch
  against trunk revision .
  ATTACHMENT ID: 12672272

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 5 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to cause Findbugs 
(version 2.0.3) to fail.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.replication.regionserver.TestReplicationThrottler

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11163//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11163//console

This message is automatically generated.

 Alter table add cf doesn't do compression test
 --

 Key: HBASE-12137
 URL: https://issues.apache.org/jira/browse/HBASE-12137
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.98.6
Reporter: Virag Kothari
Assignee: Virag Kothari
 Attachments: HBASE-12137.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12136) Race condition between client adding tableCF replication znode and server triggering TableCFsTracker

2014-10-01 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154716#comment-14154716
 ] 

Ted Yu commented on HBASE-12136:


+1

 Race condition between client adding tableCF replication znode and  server 
 triggering TableCFsTracker
 -

 Key: HBASE-12136
 URL: https://issues.apache.org/jira/browse/HBASE-12136
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.98.6
Reporter: Virag Kothari
Assignee: Virag Kothari
 Attachments: HBASE-12136.patch


 In ReplicationPeersZKImpl.addPeer(), there is a race between the client 
 creating the tableCf znode and the server triggering the TableCFsTracker. If 
 the server wins, it won't be able to read the data set on the tableCF znode, 
 and replication will be misconfigured.





[jira] [Commented] (HBASE-10200) Better error message when HttpServer fails to start due to java.net.BindException

2014-10-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154718#comment-14154718
 ] 

Hadoop QA commented on HBASE-10200:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12672293/HBASE-10020-V3.patch
  against trunk revision .
  ATTACHMENT ID: 12672293

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 5 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.master.TestDistributedLogSplitting
  org.apache.hadoop.hbase.mapred.TestTableSnapshotInputFormat
  org.apache.hadoop.hbase.mapreduce.TestTableSnapshotInputFormat
  org.apache.hadoop.hbase.TestZooKeeper

 {color:red}-1 core zombie tests{color}.  There are 2 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesSplitRecovery.testSplitWhileBulkLoadPhase(TestLoadIncrementalHFilesSplitRecovery.java:339)
at 
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.testSimpleLoad(TestLoadIncrementalHFiles.java:100)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11166//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11166//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11166//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11166//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11166//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11166//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11166//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11166//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11166//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11166//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11166//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11166//console

This message is automatically generated.

 Better error message when HttpServer fails to start due to 
 java.net.BindException
 -

 Key: HBASE-10200
 URL: https://issues.apache.org/jira/browse/HBASE-10200
 Project: HBase
  Issue Type: Task
Affects Versions: 2.0.0
Reporter: Ted Yu
Assignee: Kiran Kumar M R
Priority: Minor
  Labels: beginner
 Fix For: 2.0.0, 0.99.1

 Attachments: 10200.out, HBASE-10020-V2.patch, HBASE-10020-V3.patch, 
 HBASE-10020.patch, HBASE-10020.patch


 Starting HBase using Hoya, I saw the following in log:
 {code}
 2013-12-17 21:49:06,758 INFO  [master:hor12n19:42587] http.HttpServer: 
 HttpServer.start() threw a non Bind IOException
 java.net.BindException: Port in use: hor12n14.gq1.ygridcore.net:12432
 at org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:742)
 at 

[jira] [Commented] (HBASE-12137) Alter table add cf doesn't do compression test

2014-10-01 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154786#comment-14154786
 ] 

Jean-Marc Spaggiari commented on HBASE-12137:
-

LGTM.

Is it possible to rename descriptor to columnDescriptor?

 Alter table add cf doesn't do compression test
 --

 Key: HBASE-12137
 URL: https://issues.apache.org/jira/browse/HBASE-12137
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.98.6
Reporter: Virag Kothari
Assignee: Virag Kothari
 Attachments: HBASE-12137.patch








[jira] [Commented] (HBASE-12133) Add FastLongHistogram for metric computation

2014-10-01 Thread Yi Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154814#comment-14154814
 ] 

Yi Deng commented on HBASE-12133:
-

[~manukranthk] Thanks!

Checking over the javadoc warnings file: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11159//artifact/patchprocess/patchJavadocWarnings.txt
I didn't find a warning related to my change. Does that mean it's a false 
alarm? Same question for the core tests.

 Add FastLongHistogram for metric computation
 

 Key: HBASE-12133
 URL: https://issues.apache.org/jira/browse/HBASE-12133
 Project: HBase
  Issue Type: New Feature
  Components: metrics
Affects Versions: 0.98.8
Reporter: Yi Deng
Assignee: Yi Deng
Priority: Minor
  Labels: histogram, metrics
 Fix For: 0.98.8

 Attachments: 
 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch


 FastLongHistogram is a thread-safe class that estimates the distribution of 
 data and computes quantiles. It's useful for computing aggregated metrics 
 like P99/P95.
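
The idea can be sketched with a minimal, hypothetical implementation (not the attached patch's code): lock-free per-bucket counters over a fixed value range, with quantiles estimated by walking cumulative bucket counts. Class and method names here are illustrative only.

```java
import java.util.concurrent.atomic.AtomicLongArray;

/** Hedged sketch of a thread-safe fixed-bucket histogram with quantile estimation. */
class SimpleFastLongHistogram {
    private final long min, max;
    private final AtomicLongArray counts;   // lock-free per-bucket counters

    SimpleFastLongHistogram(long min, long max, int buckets) {
        this.min = min;
        this.max = max;
        this.counts = new AtomicLongArray(buckets);
    }

    void add(long value) {
        long clamped = Math.max(min, Math.min(max, value));
        int idx = (int) ((clamped - min) * (counts.length() - 1) / (max - min));
        counts.incrementAndGet(idx);        // atomic increment, no locking
    }

    /** Estimate the value at quantile q (0..1) by accumulating bucket counts. */
    long quantile(double q) {
        long total = 0;
        for (int i = 0; i < counts.length(); i++) total += counts.get(i);
        long target = (long) Math.ceil(q * total);
        long seen = 0;
        for (int i = 0; i < counts.length(); i++) {
            seen += counts.get(i);
            if (seen >= target) {
                // report the upper edge of the bucket as the estimate
                return min + (i + 1) * (max - min) / counts.length();
            }
        }
        return max;
    }
}

public class HistogramDemo {
    public static void main(String[] args) {
        SimpleFastLongHistogram h = new SimpleFastLongHistogram(0, 100, 100);
        for (long v = 1; v <= 100; v++) h.add(v);
        System.out.println(h.quantile(0.95));  // 95
        System.out.println(h.quantile(0.99));  // 99
    }
}
```

The estimate's accuracy is bounded by the bucket width, which is the usual trade-off such a histogram makes for speed.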





[jira] [Updated] (HBASE-12133) Add FastLongHistogram for metric computation

2014-10-01 Thread Yi Deng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Deng updated HBASE-12133:

Attachment: 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch

 Add FastLongHistogram for metric computation
 

 Key: HBASE-12133
 URL: https://issues.apache.org/jira/browse/HBASE-12133
 Project: HBase
  Issue Type: New Feature
  Components: metrics
Affects Versions: 0.98.8
Reporter: Yi Deng
Assignee: Yi Deng
Priority: Minor
  Labels: histogram, metrics
 Fix For: 0.98.8

 Attachments: 
 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch, 
 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch


 FastLongHistogram is a thread-safe class that estimates the distribution of 
 data and computes quantiles. It's useful for computing aggregated metrics 
 like P99/P95.





[jira] [Commented] (HBASE-12127) Move the core Connection creation functionality into ConnectionFactory

2014-10-01 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154819#comment-14154819
 ] 

Jean-Marc Spaggiari commented on HBASE-12127:
-

Don't see anything wrong with that.

 Move the core Connection creation functionality into ConnectionFactory
 --

 Key: HBASE-12127
 URL: https://issues.apache.org/jira/browse/HBASE-12127
 Project: HBase
  Issue Type: New Feature
Affects Versions: 1.0.0, 0.99.1
Reporter: Solomon Duskis
Assignee: Solomon Duskis
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE-12127.patch








[jira] [Created] (HBASE-12140) Add ConnectionFactory.createConnection() to create using default HBaseConfiguration.

2014-10-01 Thread Jean-Marc Spaggiari (JIRA)
Jean-Marc Spaggiari created HBASE-12140:
---

 Summary: Add ConnectionFactory.createConnection() to create using 
default HBaseConfiguration.
 Key: HBASE-12140
 URL: https://issues.apache.org/jira/browse/HBASE-12140
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-12140-v1-trunk.patch

Add
{code}
ConnectionFactory.createConnection();
{code}

which creates a connection with a default config.

A shortcut for
{code}
Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
HBaseAdmin admin = (HBaseAdmin)connection.getAdmin();
{code}





[jira] [Updated] (HBASE-12140) Add ConnectionFactory.createConnection() to create using default HBaseConfiguration.

2014-10-01 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-12140:

Attachment: HBASE-12140-v1-trunk.patch

 Add ConnectionFactory.createConnection() to create using default 
 HBaseConfiguration.
 

 Key: HBASE-12140
 URL: https://issues.apache.org/jira/browse/HBASE-12140
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-12140-v1-trunk.patch


 Add
 {code}
 ConnectionFactory.createConnection();
 {code}
 which creates a connection with a default config.
 A shortcut for
 {code}
 Configuration conf = HBaseConfiguration.create();
 Connection connection = ConnectionFactory.createConnection(conf);
 HBaseAdmin admin = (HBaseAdmin)connection.getAdmin();
 {code}





[jira] [Updated] (HBASE-12140) Add ConnectionFactory.createConnection() to create using default HBaseConfiguration.

2014-10-01 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-12140:

Status: Patch Available  (was: Open)

 Add ConnectionFactory.createConnection() to create using default 
 HBaseConfiguration.
 

 Key: HBASE-12140
 URL: https://issues.apache.org/jira/browse/HBASE-12140
 Project: HBase
  Issue Type: Bug
  Components: Client
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-12140-v1-trunk.patch


 Add
 {code}
 ConnectionFactory.createConnection();
 {code}
 which creates a connection with a default config.
 A shortcut for
 {code}
 Configuration conf = HBaseConfiguration.create();
 Connection connection = ConnectionFactory.createConnection(conf);
 HBaseAdmin admin = (HBaseAdmin)connection.getAdmin();
 {code}





[jira] [Updated] (HBASE-12140) Add ConnectionFactory.createConnection() to create using default HBaseConfiguration.

2014-10-01 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-12140:

Component/s: Client

 Add ConnectionFactory.createConnection() to create using default 
 HBaseConfiguration.
 

 Key: HBASE-12140
 URL: https://issues.apache.org/jira/browse/HBASE-12140
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.1
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-12140-v1-trunk.patch


 Add
 {code}
 ConnectionFactory.createConnection();
 {code}
 which creates a connection with a default config.
 A shortcut for
 {code}
 Configuration conf = HBaseConfiguration.create();
 Connection connection = ConnectionFactory.createConnection(conf);
 HBaseAdmin admin = (HBaseAdmin)connection.getAdmin();
 {code}





[jira] [Commented] (HBASE-12083) Deprecate new HBaseAdmin() in favor of Connection.getAdmin()

2014-10-01 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154832#comment-14154832
 ] 

Jean-Marc Spaggiari commented on HBASE-12083:
-

Created HBASE-12140 and submitted a patch too.

 Deprecate new HBaseAdmin() in favor of Connection.getAdmin()
 

 Key: HBASE-12083
 URL: https://issues.apache.org/jira/browse/HBASE-12083
 Project: HBase
  Issue Type: Bug
Reporter: Solomon Duskis
Assignee: Solomon Duskis
Priority: Critical
 Fix For: 2.0.0, 0.99.1








[jira] [Updated] (HBASE-12140) Add ConnectionFactory.createConnection() to create using default HBaseConfiguration.

2014-10-01 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-12140:

Affects Version/s: 0.99.1

 Add ConnectionFactory.createConnection() to create using default 
 HBaseConfiguration.
 

 Key: HBASE-12140
 URL: https://issues.apache.org/jira/browse/HBASE-12140
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.1
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-12140-v1-trunk.patch


 Add
 {code}
 ConnectionFactory.createConnection();
 {code}
 which creates a connection with a default config.
 A shortcut for
 {code}
 Configuration conf = HBaseConfiguration.create();
 Connection connection = ConnectionFactory.createConnection(conf);
 HBaseAdmin admin = (HBaseAdmin)connection.getAdmin();
 {code}





[jira] [Updated] (HBASE-12095) SecureWALCellCodec should handle the case where encryption is disabled

2014-10-01 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12095:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

 SecureWALCellCodec should handle the case where encryption is disabled
 --

 Key: HBASE-12095
 URL: https://issues.apache.org/jira/browse/HBASE-12095
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6
Reporter: Ashish Singhi
Assignee: Ted Yu
 Fix For: 2.0.0, 0.98.7, 0.99.1

 Attachments: 12095-v1.txt, 12095-v1.txt, 12095-v2.txt


 I observed that when I have the following value set in my hbase-site.xml file
 {code}
 <property>
   <name>hbase.regionserver.wal.encryption</name>
   <value>false</value>
 </property>
 <property>
   <name>hbase.regionserver.hlog.reader.impl</name>
   <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader</value>
 </property>
 <property>
   <name>hbase.regionserver.hlog.writer.impl</name>
   <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogWriter</value>
 </property>
 {code}
 And during log splitting on HBase service restart, the master shut down with 
 the following exception.
 Exception in master log
 {code}
 2014-09-24 17:14:28,590 FATAL [master:host-10-18-40-18:6] master.HMaster: 
 Master server abort: loaded coprocessors are: 
 [org.apache.hadoop.hbase.security.access.AccessController]
 2014-09-24 17:14:28,590 FATAL [master:host-10-18-40-18:6] master.HMaster: 
 Unhandled exception. Starting shutdown.
 java.io.IOException: error or interrupted while splitting logs in 
 [hdfs://10.18.40.18:8020/tmp/hbase-ashish/hbase/WALs/host-10-18-40-18,60020,1411558717849-splitting]
  Task = installed = 6 done = 0 error = 6
 at 
 org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:378)
 at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:415)
 at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitMetaLog(MasterFileSystem.java:307)
 at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitMetaLog(MasterFileSystem.java:298)
 at 
 org.apache.hadoop.hbase.master.HMaster.splitMetaLogBeforeAssignment(HMaster.java:1071)
 at 
 org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:863)
 at 
 org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:612)
 at java.lang.Thread.run(Thread.java:745)
 Exception in region server log
 2014-09-24 20:10:16,535 WARN  [RS_LOG_REPLAY_OPS-host-10-18-40-18:60020-1] 
 regionserver.SplitLogWorker: log splitting of 
 WALs/host-10-18-40-18,60020,1411558717849-splitting/host-10-18-40-18%2C60020%2C1411558717849.1411558724316.meta
  failed, returning error
 java.io.IOException: Cannot get log reader
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:161)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:660)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:569)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
 at 
 org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
 at 
 org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
 at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.UnsupportedOperationException: Unable to find suitable 
 constructor for class 
 org.apache.hadoop.hbase.regionserver.wal.SecureWALCellCodec
 at 
 org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:39)
 at 
 org.apache.hadoop.hbase.regionserver.wal.WALCellCodec.create(WALCellCodec.java:101)
 at 
 org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.getCodec(ProtobufLogReader.java:242)
 at 
 org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:247)
 at 
 

[jira] [Commented] (HBASE-12095) SecureWALCellCodec should handle the case where encryption is disabled

2014-10-01 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154859#comment-14154859
 ] 

Ted Yu commented on HBASE-12095:


Integrated to 0.98, branch-1 and master.

Thanks for the reviews, Anoop and Andy.

 SecureWALCellCodec should handle the case where encryption is disabled
 --

 Key: HBASE-12095
 URL: https://issues.apache.org/jira/browse/HBASE-12095
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6
Reporter: Ashish Singhi
Assignee: Ted Yu
 Fix For: 2.0.0, 0.98.7, 0.99.1

 Attachments: 12095-v1.txt, 12095-v1.txt, 12095-v2.txt


 I observed that when I have the following value set in my hbase-site.xml file
 {code}
 <property>
   <name>hbase.regionserver.wal.encryption</name>
   <value>false</value>
 </property>
 <property>
   <name>hbase.regionserver.hlog.reader.impl</name>
   <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader</value>
 </property>
 <property>
   <name>hbase.regionserver.hlog.writer.impl</name>
   <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogWriter</value>
 </property>
 {code}
 And during log splitting on HBase service restart, the master shut down with 
 the following exception.
 Exception in master log
 {code}
 2014-09-24 17:14:28,590 FATAL [master:host-10-18-40-18:6] master.HMaster: 
 Master server abort: loaded coprocessors are: 
 [org.apache.hadoop.hbase.security.access.AccessController]
 2014-09-24 17:14:28,590 FATAL [master:host-10-18-40-18:6] master.HMaster: 
 Unhandled exception. Starting shutdown.
 java.io.IOException: error or interrupted while splitting logs in 
 [hdfs://10.18.40.18:8020/tmp/hbase-ashish/hbase/WALs/host-10-18-40-18,60020,1411558717849-splitting]
  Task = installed = 6 done = 0 error = 6
 at 
 org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:378)
 at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:415)
 at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitMetaLog(MasterFileSystem.java:307)
 at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitMetaLog(MasterFileSystem.java:298)
 at 
 org.apache.hadoop.hbase.master.HMaster.splitMetaLogBeforeAssignment(HMaster.java:1071)
 at 
 org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:863)
 at 
 org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:612)
 at java.lang.Thread.run(Thread.java:745)
 Exception in region server log
 2014-09-24 20:10:16,535 WARN  [RS_LOG_REPLAY_OPS-host-10-18-40-18:60020-1] 
 regionserver.SplitLogWorker: log splitting of 
 WALs/host-10-18-40-18,60020,1411558717849-splitting/host-10-18-40-18%2C60020%2C1411558717849.1411558724316.meta
  failed, returning error
 java.io.IOException: Cannot get log reader
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:161)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:660)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:569)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
 at 
 org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
 at 
 org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
 at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.UnsupportedOperationException: Unable to find suitable 
 constructor for class 
 org.apache.hadoop.hbase.regionserver.wal.SecureWALCellCodec
 at 
 org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:39)
 at 
 org.apache.hadoop.hbase.regionserver.wal.WALCellCodec.create(WALCellCodec.java:101)
 at 
 org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.getCodec(ProtobufLogReader.java:242)
 at 
 org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:247)
 at 
 

[jira] [Updated] (HBASE-12133) Add FastLongHistogram for metric computation

2014-10-01 Thread Yi Deng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Deng updated HBASE-12133:

Attachment: 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch

Add @InterfaceAudience and @InterfaceStability annotations.

 Add FastLongHistogram for metric computation
 

 Key: HBASE-12133
 URL: https://issues.apache.org/jira/browse/HBASE-12133
 Project: HBase
  Issue Type: New Feature
  Components: metrics
Affects Versions: 0.98.8
Reporter: Yi Deng
Assignee: Yi Deng
Priority: Minor
  Labels: histogram, metrics
 Fix For: 0.98.8

 Attachments: 
 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch, 
 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch, 
 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch


 FastLongHistogram is a thread-safe class that estimates the distribution of 
 data and computes quantiles. It's useful for computing aggregated metrics 
 like P99/P95.





[jira] [Commented] (HBASE-12133) Add FastLongHistogram for metric computation

2014-10-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154900#comment-14154900
 ] 

Hadoop QA commented on HBASE-12133:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12672326/0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch
  against trunk revision .
  ATTACHMENT ID: 12672326

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 5 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 2 zombie test(s):   
at 
org.apache.hadoop.hbase.regionserver.TestMultiVersionConsistencyControl.testParallelism(TestMultiVersionConsistencyControl.java:116)
at 
org.apache.hadoop.hbase.http.TestHttpServerLifecycle.testStartedServerWithRequestLog(TestHttpServerLifecycle.java:92)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11167//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11167//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11167//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11167//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11167//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11167//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11167//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11167//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11167//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11167//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11167//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11167//console

This message is automatically generated.

 Add FastLongHistogram for metric computation
 

 Key: HBASE-12133
 URL: https://issues.apache.org/jira/browse/HBASE-12133
 Project: HBase
  Issue Type: New Feature
  Components: metrics
Affects Versions: 0.98.8
Reporter: Yi Deng
Assignee: Yi Deng
Priority: Minor
  Labels: histogram, metrics
 Fix For: 0.98.8

 Attachments: 
 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch, 
 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch, 
 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch


 FastLongHistogram is a thread-safe class that estimates the distribution of 
 data and computes quantiles. It's useful for computing aggregated metrics 
 like P99/P95.





[jira] [Commented] (HBASE-12133) Add FastLongHistogram for metric computation

2014-10-01 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154923#comment-14154923
 ] 

stack commented on HBASE-12133:
---

[~daviddengcn] Sorry, our CI build is a bit of a mess at the moment.  Let me 
spend some time today and try to clean it up some.

[~manukranthk] Yes, was referring to the Yammer histograms.  Was wondering 
what the difference is between them and this.  You lot think this implementation is 'faster'?

I'd actually like to use the setMin/setMax in another location.  I'd be up for 
adding this stuff if you add a client for this new util and if you had a rough 
comparison against Yammer that showed yours better.  Thanks.
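
The spin-until-CAS idiom mentioned for setMin/setMax (and described in the blog post linked earlier in the thread) can be sketched in plain Java. This is a hedged illustration with hypothetical class names, not HBase code: a running maximum is updated lock-free by retrying compareAndSet until either the write succeeds or another thread has already installed a larger value.

```java
import java.util.concurrent.atomic.AtomicLong;

/** Hedged sketch of the spin-until-CAS idiom for a lock-free running maximum. */
class AtomicMax {
    private final AtomicLong max = new AtomicLong(Long.MIN_VALUE);

    void update(long candidate) {
        long current = max.get();
        // spin: another thread may install a larger value between get() and CAS
        while (candidate > current && !max.compareAndSet(current, candidate)) {
            current = max.get();
        }
    }

    long get() { return max.get(); }
}

public class AtomicMaxDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicMax m = new AtomicMax();
        Thread[] ts = new Thread[4];
        for (int t = 0; t < ts.length; t++) {
            final int offset = t;
            ts[t] = new Thread(() -> {
                for (long v = offset; v < 1000; v += 4) m.update(v);
            });
            ts[t].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(m.get());  // 999 regardless of interleaving
    }
}
```

The loop terminates quickly in practice: each failed CAS means some other thread made progress, and the re-read of the current value lets the updater bail out as soon as its candidate is no longer the maximum.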

 Add FastLongHistogram for metric computation
 

 Key: HBASE-12133
 URL: https://issues.apache.org/jira/browse/HBASE-12133
 Project: HBase
  Issue Type: New Feature
  Components: metrics
Affects Versions: 0.98.8
Reporter: Yi Deng
Assignee: Yi Deng
Priority: Minor
  Labels: histogram, metrics
 Fix For: 0.98.8

 Attachments: 
 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch, 
 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch, 
 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch


 FastLongHistogram is a thread-safe class that estimate distribution of data 
 and computes the quantiles. It's useful for computing aggregated metrics like 
 P99/P95.





[jira] [Commented] (HBASE-12095) SecureWALCellCodec should handle the case where encryption is disabled

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154929#comment-14154929
 ] 

Hudson commented on HBASE-12095:


FAILURE: Integrated in HBase-TRUNK #5599 (See 
[https://builds.apache.org/job/HBase-TRUNK/5599/])
HBASE-12095 SecureWALCellCodec should handle the case where encryption is 
disabled (tedyu: rev 1587068a2c15b600eeaed8771cf4b1eebca71bfe)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLogReaderOnSecureHLog.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SecureWALCellCodec.java


 SecureWALCellCodec should handle the case where encryption is disabled
 --

 Key: HBASE-12095
 URL: https://issues.apache.org/jira/browse/HBASE-12095
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6
Reporter: Ashish Singhi
Assignee: Ted Yu
 Fix For: 2.0.0, 0.98.7, 0.99.1

 Attachments: 12095-v1.txt, 12095-v1.txt, 12095-v2.txt


 I observed that when I have the following value set in my hbase-site.xml file
 {code}
 <property>
   <name>hbase.regionserver.wal.encryption</name>
   <value>false</value>
 </property>
 <property>
   <name>hbase.regionserver.hlog.reader.impl</name>
   <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader</value>
 </property>
 <property>
   <name>hbase.regionserver.hlog.writer.impl</name>
   <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogWriter</value>
 </property>
 {code}
 Then, while splitting logs on HBase service restart, the master shut down 
 with the following exception.
 Exception in master log
 {code}
 2014-09-24 17:14:28,590 FATAL [master:host-10-18-40-18:6] master.HMaster: 
 Master server abort: loaded coprocessors are: 
 [org.apache.hadoop.hbase.security.access.AccessController]
 2014-09-24 17:14:28,590 FATAL [master:host-10-18-40-18:6] master.HMaster: 
 Unhandled exception. Starting shutdown.
 java.io.IOException: error or interrupted while splitting logs in 
 [hdfs://10.18.40.18:8020/tmp/hbase-ashish/hbase/WALs/host-10-18-40-18,60020,1411558717849-splitting]
  Task = installed = 6 done = 0 error = 6
 at 
 org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:378)
 at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:415)
 at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitMetaLog(MasterFileSystem.java:307)
 at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitMetaLog(MasterFileSystem.java:298)
 at 
 org.apache.hadoop.hbase.master.HMaster.splitMetaLogBeforeAssignment(HMaster.java:1071)
 at 
 org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:863)
 at 
 org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:612)
 at java.lang.Thread.run(Thread.java:745)
 Exception in region server log
 2014-09-24 20:10:16,535 WARN  [RS_LOG_REPLAY_OPS-host-10-18-40-18:60020-1] 
 regionserver.SplitLogWorker: log splitting of 
 WALs/host-10-18-40-18,60020,1411558717849-splitting/host-10-18-40-18%2C60020%2C1411558717849.1411558724316.meta
  failed, returning error
 java.io.IOException: Cannot get log reader
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:161)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:660)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:569)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
 at 
 org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
 at 
 org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
 at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.UnsupportedOperationException: Unable to find suitable 
 constructor for class 
 org.apache.hadoop.hbase.regionserver.wal.SecureWALCellCodec
 at 
 org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:39)
 at 
 

[jira] [Updated] (HBASE-12140) Add ConnectionFactory.createConnection() to create using default HBaseConfiguration.

2014-10-01 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12140:
--
Fix Version/s: 0.99.1
   2.0.0

 Add ConnectionFactory.createConnection() to create using default 
 HBaseConfiguration.
 

 Key: HBASE-12140
 URL: https://issues.apache.org/jira/browse/HBASE-12140
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.1
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE-12140-v1-trunk.patch


 Add
 {code}
 ConnectionFactory.createConnection();
 {code}
 which creates a connection with a default config.
 A shortcut for
 {code}
 Configuration conf = HBaseConfiguration.create();
 Connection connection = ConnectionFactory.createConnection(conf);
 HBaseAdmin admin = (HBaseAdmin)connection.getAdmin();
 {code}
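 The shortcut is just a delegating overload. A minimal sketch of the pattern, 
 using hypothetical stand-in classes rather than the real HBase client API:
 {code}
```java
// Stand-ins for the HBase client types, for illustration only.
class Configuration {}

class HBaseConfiguration {
    static Configuration create() {
        // The real client loads defaults from hbase-site.xml here.
        return new Configuration();
    }
}

class Connection {
    final Configuration conf;
    Connection(Configuration conf) { this.conf = conf; }
}

class ConnectionFactory {
    static Connection createConnection(Configuration conf) {
        return new Connection(conf);
    }

    // The proposed no-arg overload: delegate with a default configuration.
    static Connection createConnection() {
        return createConnection(HBaseConfiguration.create());
    }
}
```
 {code}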



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12140) Add ConnectionFactory.createConnection() to create using default HBaseConfiguration.

2014-10-01 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154938#comment-14154938
 ] 

stack commented on HBASE-12140:
---

+1

Good by you [~enis] for branch-1?

 Add ConnectionFactory.createConnection() to create using default 
 HBaseConfiguration.
 

 Key: HBASE-12140
 URL: https://issues.apache.org/jira/browse/HBASE-12140
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.1
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE-12140-v1-trunk.patch


 Add
 {code}
 ConnectionFactory.createConnection();
 {code}
 which creates a connection with a default config.
 A shortcut for
 {code}
 Configuration conf = HBaseConfiguration.create();
 Connection connection = ConnectionFactory.createConnection(conf);
 HBaseAdmin admin = (HBaseAdmin)connection.getAdmin();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11764) Support per cell TTLs

2014-10-01 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11764:
---
Attachment: HBASE-11764.patch

Rebase

 Support per cell TTLs
 -

 Key: HBASE-11764
 URL: https://issues.apache.org/jira/browse/HBASE-11764
 Project: HBase
  Issue Type: Sub-task
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 2.0.0, 0.98.7, 0.99.1

 Attachments: HBASE-11764-0.98.patch, HBASE-11764.patch, 
 HBASE-11764.patch, HBASE-11764.patch, HBASE-11764.patch, HBASE-11764.patch, 
 HBASE-11764.patch, HBASE-11764.patch, HBASE-11764.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12095) SecureWALCellCodec should handle the case where encryption is disabled

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154966#comment-14154966
 ] 

Hudson commented on HBASE-12095:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #526 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/526/])
HBASE-12095 SecureWALCellCodec should handle the case where encryption is 
disabled (tedyu: rev e78224bd66a0adbd08e6eea9a3dd494d17f861c3)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLogReaderOnSecureHLog.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SecureWALCellCodec.java


 SecureWALCellCodec should handle the case where encryption is disabled
 --

 Key: HBASE-12095
 URL: https://issues.apache.org/jira/browse/HBASE-12095
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6
Reporter: Ashish Singhi
Assignee: Ted Yu
 Fix For: 2.0.0, 0.98.7, 0.99.1

 Attachments: 12095-v1.txt, 12095-v1.txt, 12095-v2.txt


 I observed that when I have the following values set in my hbase-site.xml file
 {code}
 <property>
   <name>hbase.regionserver.wal.encryption</name>
   <value>false</value>
 </property>
 <property>
   <name>hbase.regionserver.hlog.reader.impl</name>
   <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader</value>
 </property>
 <property>
   <name>hbase.regionserver.hlog.writer.impl</name>
   <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogWriter</value>
 </property>
 {code}
 Then, while splitting logs on HBase service restart, the master shut down 
 with the following exception.
 Exception in master log
 {code}
 2014-09-24 17:14:28,590 FATAL [master:host-10-18-40-18:6] master.HMaster: 
 Master server abort: loaded coprocessors are: 
 [org.apache.hadoop.hbase.security.access.AccessController]
 2014-09-24 17:14:28,590 FATAL [master:host-10-18-40-18:6] master.HMaster: 
 Unhandled exception. Starting shutdown.
 java.io.IOException: error or interrupted while splitting logs in 
 [hdfs://10.18.40.18:8020/tmp/hbase-ashish/hbase/WALs/host-10-18-40-18,60020,1411558717849-splitting]
  Task = installed = 6 done = 0 error = 6
 at 
 org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:378)
 at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:415)
 at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitMetaLog(MasterFileSystem.java:307)
 at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitMetaLog(MasterFileSystem.java:298)
 at 
 org.apache.hadoop.hbase.master.HMaster.splitMetaLogBeforeAssignment(HMaster.java:1071)
 at 
 org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:863)
 at 
 org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:612)
 at java.lang.Thread.run(Thread.java:745)
 Exception in region server log
 2014-09-24 20:10:16,535 WARN  [RS_LOG_REPLAY_OPS-host-10-18-40-18:60020-1] 
 regionserver.SplitLogWorker: log splitting of 
 WALs/host-10-18-40-18,60020,1411558717849-splitting/host-10-18-40-18%2C60020%2C1411558717849.1411558724316.meta
  failed, returning error
 java.io.IOException: Cannot get log reader
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:161)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:660)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:569)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
 at 
 org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
 at 
 org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
 at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.UnsupportedOperationException: Unable to find suitable 
 constructor for class 
 org.apache.hadoop.hbase.regionserver.wal.SecureWALCellCodec
 at 
 org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:39)
 

[jira] [Commented] (HBASE-12124) Closed region could stay closed if master stops at bad time

2014-10-01 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154974#comment-14154974
 ] 

Matteo Bertozzi commented on HBASE-12124:
-

+1

 Closed region could stay closed if master stops at bad time
 ---

 Key: HBASE-12124
 URL: https://issues.apache.org/jira/browse/HBASE-12124
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.99.1

 Attachments: hbase-12124.patch


 This applies to RPC-based region assignment only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12124) Closed region could stay closed if master stops at bad time

2014-10-01 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-12124:

   Resolution: Fixed
Fix Version/s: 0.98.7
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Integrated into branch 1 and 0.98. Thanks Matteo for reviewing it.

 Closed region could stay closed if master stops at bad time
 ---

 Key: HBASE-12124
 URL: https://issues.apache.org/jira/browse/HBASE-12124
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.98.7, 0.99.1

 Attachments: hbase-12124.patch


 This applies to RPC-based region assignment only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12141) ClusterStatus should frame protobuf payloads

2014-10-01 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12141:
---
Affects Version/s: 0.98.3

Ping [~nkeywal]

 ClusterStatus should frame protobuf payloads
 

 Key: HBASE-12141
 URL: https://issues.apache.org/jira/browse/HBASE-12141
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.3
Reporter: Andrew Purtell

 The multicast ClusterStatusPublisher and its companion listener are using 
 datagram channels without any framing. Netty's ProtobufDecoder expects a 
 complete PB message to be available in the ChannelBuffer. As one user 
 reported on list:
 {noformat}
 org.apache.hadoop.hbase.client.ClusterStatusListener - ERROR - Unexpected 
 exception, continuing.
 com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had 
 invalid wire type.
 at 
 com.google.protobuf.InvalidProtocolBufferException.invalidWireType(InvalidProtocolBufferException.java:99)
 at 
 com.google.protobuf.UnknownFieldSet$Builder.mergeFieldFrom(UnknownFieldSet.java:498)
 at 
 com.google.protobuf.GeneratedMessage.parseUnknownField(GeneratedMessage.java:193)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus.<init>(ClusterStatusProtos.java:7554)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus.<init>(ClusterStatusProtos.java:7512)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus$1.parsePartialFrom(ClusterStatusProtos.java:7689)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus$1.parsePartialFrom(ClusterStatusProtos.java:7684)
 at 
 com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:141)
 at 
 com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:176)
 at 
 com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:182)
 at 
 com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
 at 
 org.jboss.netty.handler.codec.protobuf.ProtobufDecoder.decode(ProtobufDecoder.java:122)
 at 
 org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:66)
 at 
 org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
 at 
 org.jboss.netty.channel.socket.oio.OioDatagramWorker.process(OioDatagramWorker.java:52)
 at 
 org.jboss.netty.channel.socket.oio.AbstractOioWorker.run(AbstractOioWorker.java:73)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 The javadoc for ProtobufDecoder says:
 {quote}
 Decodes a received ChannelBuffer into a Google Protocol Buffers Message and 
 MessageLite. Please note that this decoder must be used with a proper 
 FrameDecoder such as ProtobufVarint32FrameDecoder or 
 LengthFieldBasedFrameDecoder if you are using a stream-based transport such 
 as TCP/IP.
 {quote}
 and even though we are using a datagram transport we have related issues, 
 depending on what the sending and receiving OS does with overly large 
 datagrams:
 - We may receive a datagram with a truncated message
 - We may get an upcall when processing one fragment of a fragmented datagram, 
 where the complete message is not available yet
 - We may not be able to send the overly large ClusterStatus in the first 
 place. Linux claims to do PMTU and return EMSGSIZE if a datagram packet 
 payload exceeds the MTU, but will send a fragmented datagram if PMTU is 
 disabled. I'm surprised we have the above report given the default is to 
 reject overly large datagram payloads, so perhaps the user is using a 
 different server OS or Netty datagram channels do their own fragmentation (I 
 haven't checked).
 In any case, the server and client pipelines are definitely not doing any 
 kind of framing. This is the multicast status listener from 0.98 for example:
 {code}
   b.setPipeline(Channels.pipeline(
   new 
 ProtobufDecoder(ClusterStatusProtos.ClusterStatus.getDefaultInstance()),
   new ClusterStatusHandler()));
 {code}
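 The framing the ProtobufDecoder javadoc asks for is a length prefix ahead of 
 each message. For illustration (this is a self-contained sketch of the 
 varint32 scheme behind Netty's ProtobufVarint32FrameDecoder/Encoder pair, 
 not Netty's actual code):
 {code}
```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Minimal varint32 length-prefix framing: each frame is a base-128 varint
// payload length followed by exactly that many payload bytes.
class Varint32Framing {
    static void writeFrame(OutputStream out, byte[] payload) throws IOException {
        int v = payload.length;
        while ((v & ~0x7F) != 0) {
            out.write((v & 0x7F) | 0x80);  // high bit set: more length bytes follow
            v >>>= 7;
        }
        out.write(v);
        out.write(payload);
    }

    static byte[] readFrame(InputStream in) throws IOException {
        int len = 0, shift = 0, b;
        do {
            b = in.read();
            if (b < 0) throw new IOException("truncated frame");
            len |= (b & 0x7F) << shift;
            shift += 7;
        } while ((b & 0x80) != 0);
        byte[] buf = new byte[len];
        new DataInputStream(in).readFully(buf);  // exactly len payload bytes
        return buf;
    }
}
```
 {code}
 With a prefix like this the decoder knows where one serialized ClusterStatus 
 ends and the next begins; without it, any truncation or partial delivery 
 surfaces as the InvalidProtocolBufferException above.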



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12141) ClusterStatus should frame protobuf payloads

2014-10-01 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-12141:
--

 Summary: ClusterStatus should frame protobuf payloads
 Key: HBASE-12141
 URL: https://issues.apache.org/jira/browse/HBASE-12141
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell


The multicast ClusterStatusPublisher and its companion listener are using 
datagram channels without any framing. Netty's ProtobufDecoder expects a 
complete PB message to be available in the ChannelBuffer. As one user reported 
on list:
{noformat}
org.apache.hadoop.hbase.client.ClusterStatusListener - ERROR - Unexpected 
exception, continuing.
com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had 
invalid wire type.
at 
com.google.protobuf.InvalidProtocolBufferException.invalidWireType(InvalidProtocolBufferException.java:99)
at 
com.google.protobuf.UnknownFieldSet$Builder.mergeFieldFrom(UnknownFieldSet.java:498)
at 
com.google.protobuf.GeneratedMessage.parseUnknownField(GeneratedMessage.java:193)
at 
org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus.<init>(ClusterStatusProtos.java:7554)
at 
org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus.<init>(ClusterStatusProtos.java:7512)
at 
org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus$1.parsePartialFrom(ClusterStatusProtos.java:7689)
at 
org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus$1.parsePartialFrom(ClusterStatusProtos.java:7684)
at 
com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:141)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:176)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:182)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
at 
org.jboss.netty.handler.codec.protobuf.ProtobufDecoder.decode(ProtobufDecoder.java:122)
at 
org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:66)
at 
org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at 
org.jboss.netty.channel.socket.oio.OioDatagramWorker.process(OioDatagramWorker.java:52)
at 
org.jboss.netty.channel.socket.oio.AbstractOioWorker.run(AbstractOioWorker.java:73)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{noformat}

The javadoc for ProtobufDecoder says:
{quote}
Decodes a received ChannelBuffer into a Google Protocol Buffers Message and 
MessageLite. Please note that this decoder must be used with a proper 
FrameDecoder such as ProtobufVarint32FrameDecoder or 
LengthFieldBasedFrameDecoder if you are using a stream-based transport such as 
TCP/IP.
{quote}
and even though we are using a datagram transport we have related issues, 
depending on what the sending and receiving OS does with overly large datagrams:
- We may receive a datagram with a truncated message
- We may get an upcall when processing one fragment of a fragmented datagram, 
where the complete message is not available yet
- We may not be able to send the overly large ClusterStatus in the first place. 
Linux claims to do PMTU and return EMSGSIZE if a datagram packet payload 
exceeds the MTU, but will send a fragmented datagram if PMTU is disabled. I'm 
surprised we have the above report given the default is to reject overly large 
datagram payloads, so perhaps the user is using a different server OS or Netty 
datagram channels do their own fragmentation (I haven't checked).

In any case, the server and client pipelines are definitely not doing any kind 
of framing. This is the multicast status listener from 0.98 for example:
{code}
  b.setPipeline(Channels.pipeline(
  new 
ProtobufDecoder(ClusterStatusProtos.ClusterStatus.getDefaultInstance()),
  new ClusterStatusHandler()));
{code}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12139) StochasticLoadBalancer doesn't work on large lightly loaded clusters

2014-10-01 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154995#comment-14154995
 ] 

Elliott Clark commented on HBASE-12139:
---

Tests pass locally, so it's just Apache Jenkins doing its thing.

 StochasticLoadBalancer doesn't work on large lightly loaded clusters
 

 Key: HBASE-12139
 URL: https://issues.apache.org/jira/browse/HBASE-12139
 Project: HBase
  Issue Type: Bug
  Components: Balancer, master
Affects Versions: 0.99.0, 2.0.0, 0.98.6.1
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Critical
 Fix For: 2.0.0, 0.98.7, 0.99.1

 Attachments: 
 0001-HBASE-12139-StochasticLoadBalancer-doesn-t-work-on-l-0.98.patch, 
 0001-HBASE-12139-StochasticLoadBalancer-doesn-t-work-on-l.patch


 If you have a cluster with > 200 nodes and a small table (< 50 regions), the 
 StochasticLoadBalancer doesn't work at all. It is not correctly scaling the 
 required values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12141) ClusterStatus message might exceeded max datagram payload limits

2014-10-01 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12141:
---
Summary: ClusterStatus message might exceeded max datagram payload limits  
(was: ClusterStatus should frame protobuf payloads)

 ClusterStatus message might exceeded max datagram payload limits
 

 Key: HBASE-12141
 URL: https://issues.apache.org/jira/browse/HBASE-12141
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.3
Reporter: Andrew Purtell

 The multicast ClusterStatusPublisher and its companion listener are using 
 datagram channels without any framing. Netty's ProtobufDecoder expects a 
 complete PB message to be available in the ChannelBuffer. As one user 
 reported on list:
 {noformat}
 org.apache.hadoop.hbase.client.ClusterStatusListener - ERROR - Unexpected 
 exception, continuing.
 com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had 
 invalid wire type.
 at 
 com.google.protobuf.InvalidProtocolBufferException.invalidWireType(InvalidProtocolBufferException.java:99)
 at 
 com.google.protobuf.UnknownFieldSet$Builder.mergeFieldFrom(UnknownFieldSet.java:498)
 at 
 com.google.protobuf.GeneratedMessage.parseUnknownField(GeneratedMessage.java:193)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus.<init>(ClusterStatusProtos.java:7554)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus.<init>(ClusterStatusProtos.java:7512)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus$1.parsePartialFrom(ClusterStatusProtos.java:7689)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus$1.parsePartialFrom(ClusterStatusProtos.java:7684)
 at 
 com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:141)
 at 
 com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:176)
 at 
 com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:182)
 at 
 com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
 at 
 org.jboss.netty.handler.codec.protobuf.ProtobufDecoder.decode(ProtobufDecoder.java:122)
 at 
 org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:66)
 at 
 org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
 at 
 org.jboss.netty.channel.socket.oio.OioDatagramWorker.process(OioDatagramWorker.java:52)
 at 
 org.jboss.netty.channel.socket.oio.AbstractOioWorker.run(AbstractOioWorker.java:73)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 The javadoc for ProtobufDecoder says:
 {quote}
 Decodes a received ChannelBuffer into a Google Protocol Buffers Message and 
 MessageLite. Please note that this decoder must be used with a proper 
 FrameDecoder such as ProtobufVarint32FrameDecoder or 
 LengthFieldBasedFrameDecoder if you are using a stream-based transport such 
 as TCP/IP.
 {quote}
 and even though we are using a datagram transport we have related issues, 
 depending on what the sending and receiving OS does with overly large 
 datagrams:
 - We may receive a datagram with a truncated message
 - We may get an upcall when processing one fragment of a fragmented datagram, 
 where the complete message is not available yet
 - We may not be able to send the overly large ClusterStatus in the first 
 place. Linux claims to do PMTU and return EMSGSIZE if a datagram packet 
 payload exceeds the MTU, but will send a fragmented datagram if PMTU is 
 disabled. I'm surprised we have the above report given the default is to 
 reject overly large datagram payloads, so perhaps the user is using a 
 different server OS or Netty datagram channels do their own fragmentation (I 
 haven't checked).
 In any case, the server and client pipelines are definitely not doing any 
 kind of framing. This is the multicast status listener from 0.98 for example:
 {code}
   b.setPipeline(Channels.pipeline(
   new 
 ProtobufDecoder(ClusterStatusProtos.ClusterStatus.getDefaultInstance()),
   new ClusterStatusHandler()));
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12141) ClusterStatus message might exceeded max datagram payload limits

2014-10-01 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12141:
---
Description: 
The multicast ClusterStatusPublisher and its companion listener are using 
datagram channels without any framing. I think this is an issue because Netty's 
ProtobufDecoder expects a complete PB message to be available in the 
ChannelBuffer yet ClusterStatus messages can be large and might exceed the 
maximum datagram payload size. As one user reported on list:
{noformat}
org.apache.hadoop.hbase.client.ClusterStatusListener - ERROR - Unexpected 
exception, continuing.
com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had 
invalid wire type.
at 
com.google.protobuf.InvalidProtocolBufferException.invalidWireType(InvalidProtocolBufferException.java:99)
at 
com.google.protobuf.UnknownFieldSet$Builder.mergeFieldFrom(UnknownFieldSet.java:498)
at 
com.google.protobuf.GeneratedMessage.parseUnknownField(GeneratedMessage.java:193)
at 
org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus.<init>(ClusterStatusProtos.java:7554)
at 
org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus.<init>(ClusterStatusProtos.java:7512)
at 
org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus$1.parsePartialFrom(ClusterStatusProtos.java:7689)
at 
org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus$1.parsePartialFrom(ClusterStatusProtos.java:7684)
at 
com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:141)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:176)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:182)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
at 
org.jboss.netty.handler.codec.protobuf.ProtobufDecoder.decode(ProtobufDecoder.java:122)
at 
org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:66)
at 
org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at 
org.jboss.netty.channel.socket.oio.OioDatagramWorker.process(OioDatagramWorker.java:52)
at 
org.jboss.netty.channel.socket.oio.AbstractOioWorker.run(AbstractOioWorker.java:73)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{noformat}

The javadoc for ProtobufDecoder says:
{quote}
Decodes a received ChannelBuffer into a Google Protocol Buffers Message and 
MessageLite. Please note that this decoder must be used with a proper 
FrameDecoder such as ProtobufVarint32FrameDecoder or 
LengthFieldBasedFrameDecoder if you are using a stream-based transport such as 
TCP/IP.
{quote}
and even though we are using a datagram transport we have related issues, 
depending on what the sending and receiving OS does with overly large datagrams:
- We may receive a datagram with a truncated message
- We may get an upcall when processing one fragment of a fragmented datagram, 
where the complete message is not available yet
- We may not be able to send the overly large ClusterStatus in the first place. 
Linux claims to do PMTU and return EMSGSIZE if a datagram packet payload 
exceeds the MTU, but will send a fragmented datagram if PMTU is disabled. I'm 
surprised we have the above report given the default is to reject overly large 
datagram payloads, so perhaps the user is using a different server OS or Netty 
datagram channels do their own fragmentation (I haven't checked).

In any case, the server and client pipelines are definitely not doing any kind 
of framing. This is the multicast status listener from 0.98 for example:
{code}
  b.setPipeline(Channels.pipeline(
  new 
ProtobufDecoder(ClusterStatusProtos.ClusterStatus.getDefaultInstance()),
  new ClusterStatusHandler()));
{code}


  was:
The multicast ClusterStatusPublisher and its companion listener are using 
datagram channels without any framing. Netty's ProtobufDecoder expects a 
complete PB message to be available in the ChannelBuffer. As one user reported 
on list:
{noformat}
org.apache.hadoop.hbase.client.ClusterStatusListener - ERROR - Unexpected 
exception, continuing.
com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had 
invalid wire type.
at 
com.google.protobuf.InvalidProtocolBufferException.invalidWireType(InvalidProtocolBufferException.java:99)
at 
com.google.protobuf.UnknownFieldSet$Builder.mergeFieldFrom(UnknownFieldSet.java:498)
at 
com.google.protobuf.GeneratedMessage.parseUnknownField(GeneratedMessage.java:193)
at 

[jira] [Updated] (HBASE-12141) ClusterStatus message might exceed max datagram payload limits

2014-10-01 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12141:
---
Summary: ClusterStatus message might exceed max datagram payload limits  
(was: ClusterStatus message might exceeded max datagram payload limits)

 ClusterStatus message might exceed max datagram payload limits
 --

 Key: HBASE-12141
 URL: https://issues.apache.org/jira/browse/HBASE-12141
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.3
Reporter: Andrew Purtell

 The multicast ClusterStatusPublisher and its companion listener are using 
 datagram channels without any framing. I think this is an issue because 
 Netty's ProtobufDecoder expects a complete PB message to be available in the 
 ChannelBuffer yet ClusterStatus messages can be large and might exceed the 
 maximum datagram payload size. As one user reported on list:
 {noformat}
 org.apache.hadoop.hbase.client.ClusterStatusListener - ERROR - Unexpected 
 exception, continuing.
 com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had 
 invalid wire type.
 at 
 com.google.protobuf.InvalidProtocolBufferException.invalidWireType(InvalidProtocolBufferException.java:99)
 at 
 com.google.protobuf.UnknownFieldSet$Builder.mergeFieldFrom(UnknownFieldSet.java:498)
 at 
 com.google.protobuf.GeneratedMessage.parseUnknownField(GeneratedMessage.java:193)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus.init(ClusterStatusProtos.java:7554)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus.init(ClusterStatusProtos.java:7512)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus$1.parsePartialFrom(ClusterStatusProtos.java:7689)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus$1.parsePartialFrom(ClusterStatusProtos.java:7684)
 at 
 com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:141)
 at 
 com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:176)
 at 
 com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:182)
 at 
 com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
 at 
 org.jboss.netty.handler.codec.protobuf.ProtobufDecoder.decode(ProtobufDecoder.java:122)
 at 
 org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:66)
 at 
 org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
 at 
 org.jboss.netty.channel.socket.oio.OioDatagramWorker.process(OioDatagramWorker.java:52)
 at 
 org.jboss.netty.channel.socket.oio.AbstractOioWorker.run(AbstractOioWorker.java:73)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 The javadoc for ProtobufDecoder says:
 {quote}
 Decodes a received ChannelBuffer into a Google Protocol Buffers Message and 
 MessageLite. Please note that this decoder must be used with a proper 
 FrameDecoder such as ProtobufVarint32FrameDecoder or 
 LengthFieldBasedFrameDecoder if you are using a stream-based transport such 
 as TCP/IP.
 {quote}
 and even though we are using a datagram transport we have related issues, 
 depending on what the sending and receiving OS does with overly large 
 datagrams:
 - We may receive a datagram with a truncated message
 - We may get an upcall when processing one fragment of a fragmented datagram, 
 where the complete message is not available yet
 - We may not be able to send the overly large ClusterStatus in the first 
 place. Linux claims to do PMTU and return EMSGSIZE if a datagram packet 
 payload exceeds the MTU, but will send a fragmented datagram if PMTU is 
 disabled. I'm surprised we have the above report given the default is to 
 reject overly large datagram payloads, so perhaps the user is using a 
 different server OS or Netty datagram channels do their own fragmentation (I 
 haven't checked).
 In any case, the server and client pipelines are definitely not doing any 
 kind of framing. This is the multicast status listener from 0.98 for example:
 {code}
   b.setPipeline(Channels.pipeline(
       new ProtobufDecoder(ClusterStatusProtos.ClusterStatus.getDefaultInstance()),
       new ClusterStatusHandler()));
 {code}
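The ProtobufVarint32FrameDecoder mentioned in the quoted javadoc works by pairing each message with a length prefix, so the receiver knows how many bytes make up one complete protobuf payload. As a rough, self-contained illustration of that idea (a sketch using only the JDK, not the actual HBase patch or Netty's implementation):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Minimal sketch of length-prefix framing: the sender writes a 4-byte
// length before the serialized message, and the receiver reads exactly
// that many bytes before handing the payload to the protobuf parser.
public class FramingSketch {
    static ByteBuffer frame(byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(4 + payload.length);
        buf.putInt(payload.length); // length prefix
        buf.put(payload);           // then the message bytes
        buf.flip();
        return buf;
    }

    static byte[] unframe(ByteBuffer buf) {
        int len = buf.getInt();     // read the declared length
        if (buf.remaining() < len) {
            return null;            // incomplete frame: wait for more data
        }
        byte[] payload = new byte[len];
        buf.get(payload);
        return payload;
    }

    public static void main(String[] args) {
        byte[] msg = "cluster-status".getBytes();
        byte[] out = unframe(frame(msg));
        System.out.println(Arrays.equals(msg, out)); // round-trips intact
    }
}
```

With framing like this, a truncated datagram fails the `remaining() < len` check instead of handing a partial buffer to the protobuf parser, which is what produces the `InvalidProtocolBufferException` above.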



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12136) Race condition between client adding tableCF replication znode and server triggering TableCFsTracker

2014-10-01 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12136:
---
Fix Version/s: 0.99.1
   0.98.7
   2.0.0

Set fix versions

 Race condition between client adding tableCF replication znode and server
 triggering TableCFsTracker
 -

 Key: HBASE-12136
 URL: https://issues.apache.org/jira/browse/HBASE-12136
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.98.6
Reporter: Virag Kothari
Assignee: Virag Kothari
 Fix For: 2.0.0, 0.98.7, 0.99.1

 Attachments: HBASE-12136.patch


 In ReplicationPeersZKImpl.addPeer(), there is a race between the client
 creating the tableCF znode and the server triggering the TableCFsTracker. If
 the server wins, it won't be able to read the data set on the tableCF znode,
 and replication will be misconfigured.
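The race described above can be simulated without ZooKeeper. In this hypothetical sketch (the `znode` reference and string contents are illustrative, not the actual HBase znode layout), creating the node empty and setting its data afterwards lets a creation-triggered watcher observe the empty state, while creating it with its data in a single step does not:

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical single-threaded simulation of the race: a "znode" created
// empty and populated later can be observed before its data arrives.
public class ZnodeRaceSketch {
    static final AtomicReference<String> znode = new AtomicReference<>();

    public static void main(String[] args) {
        // Racy order: create empty, then set data.
        znode.set("");                   // client creates the tableCF znode
        String seenEarly = znode.get();  // server's tracker fires here, too soon
        znode.set("cf1:cf2");            // client sets the data too late

        // Safe order: create the znode with its data in one step.
        znode.set("cf1:cf2");
        String seenSafe = znode.get();   // tracker always sees the data

        System.out.println(seenEarly.isEmpty() + " " + seenSafe);
    }
}
```

The same reasoning applies to the real fix: if the znode is created with its payload atomically, there is no window in which the tracker reads an empty node.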





[jira] [Commented] (HBASE-12095) SecureWALCellCodec should handle the case where encryption is disabled

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14155056#comment-14155056
 ] 

Hudson commented on HBASE-12095:


FAILURE: Integrated in HBase-0.98 #554 (See 
[https://builds.apache.org/job/HBase-0.98/554/])
HBASE-12095 SecureWALCellCodec should handle the case where encryption is 
disabled (tedyu: rev e78224bd66a0adbd08e6eea9a3dd494d17f861c3)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SecureWALCellCodec.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLogReaderOnSecureHLog.java


 SecureWALCellCodec should handle the case where encryption is disabled
 --

 Key: HBASE-12095
 URL: https://issues.apache.org/jira/browse/HBASE-12095
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6
Reporter: Ashish Singhi
Assignee: Ted Yu
 Fix For: 2.0.0, 0.98.7, 0.99.1

 Attachments: 12095-v1.txt, 12095-v1.txt, 12095-v2.txt


 I observed that when I have the following value set in my hbase-site.xml file
 {code}
 <property>
   <name>hbase.regionserver.wal.encryption</name>
   <value>false</value>
 </property>
 <property>
   <name>hbase.regionserver.hlog.reader.impl</name>
   <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader</value>
 </property>
 <property>
   <name>hbase.regionserver.hlog.writer.impl</name>
   <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogWriter</value>
 </property>
 {code}
 And during log splitting on HBase service restart, the master shut down with
 the following exception.
 Exception in master log
 {code}
 2014-09-24 17:14:28,590 FATAL [master:host-10-18-40-18:6] master.HMaster: 
 Master server abort: loaded coprocessors are: 
 [org.apache.hadoop.hbase.security.access.AccessController]
 2014-09-24 17:14:28,590 FATAL [master:host-10-18-40-18:6] master.HMaster: 
 Unhandled exception. Starting shutdown.
 java.io.IOException: error or interrupted while splitting logs in 
 [hdfs://10.18.40.18:8020/tmp/hbase-ashish/hbase/WALs/host-10-18-40-18,60020,1411558717849-splitting]
  Task = installed = 6 done = 0 error = 6
 at 
 org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:378)
 at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:415)
 at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitMetaLog(MasterFileSystem.java:307)
 at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitMetaLog(MasterFileSystem.java:298)
 at 
 org.apache.hadoop.hbase.master.HMaster.splitMetaLogBeforeAssignment(HMaster.java:1071)
 at 
 org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:863)
 at 
 org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:612)
 at java.lang.Thread.run(Thread.java:745)
 Exception in region server log
 2014-09-24 20:10:16,535 WARN  [RS_LOG_REPLAY_OPS-host-10-18-40-18:60020-1] 
 regionserver.SplitLogWorker: log splitting of 
 WALs/host-10-18-40-18,60020,1411558717849-splitting/host-10-18-40-18%2C60020%2C1411558717849.1411558724316.meta
  failed, returning error
 java.io.IOException: Cannot get log reader
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:161)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:660)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:569)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
 at 
 org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
 at 
 org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
 at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.UnsupportedOperationException: Unable to find suitable 
 constructor for class 
 org.apache.hadoop.hbase.regionserver.wal.SecureWALCellCodec
 at 
 org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:39)
 at 
 

[jira] [Commented] (HBASE-12065) Import tool is not restoring multiple DeleteFamily markers of a row

2014-10-01 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14155065#comment-14155065
 ] 

Andrew Purtell commented on HBASE-12065:


Will commit shortly.

  Import tool is not restoring multiple DeleteFamily markers of a row
 

 Key: HBASE-12065
 URL: https://issues.apache.org/jira/browse/HBASE-12065
 Project: HBase
  Issue Type: Bug
  Components: util
Affects Versions: 0.98.2
Reporter: Maddineni Sukumar
Assignee: Maddineni Sukumar
Priority: Minor
 Fix For: 2.0.0, 0.98.7, 0.99.1, 0.94.25

 Attachments: hbase-12065-fix-2.patch, hbase-12065-fix.patch, 
 hbase-12065-unit-test.patch


 When a row has more than one DeleteFamily marker, the Import tool does not
 restore all of them.
 Scenario: Insert entries into hbase in below order
 Put Row1 with Value-A
 Delete Row1 with DeleteFamily Marker
 Put Row1 with Value-B
 Delete Row1 with DeleteFamily Marker
 Export this data using the Export tool and import it into another table; you
 will see the entries below:
 Delete Row1 with DeleteFamily Marker
 Put Row1 with Value-B
 Put Row1 with Value-A
 One DeleteFamily marker is missing here. In the Import tool's
 Importer.writeResult() method we are batching all deletes into a single
 Delete request and pushing it into HBase, so only one delete family marker
 reaches the table.
 I tried the same with a normal HTable.delete command.
 If you pass multiple DeleteFamily markers for a row in a single Delete
 request to HBase, the table retains only one.
 If that is the expected behavior of HBase, then we should change the logic in
 the Import tool to push DeleteFamily markers individually, one by one.
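The collapsing behavior described above can be illustrated without an HBase cluster. In this hypothetical simulation (markers modeled as family/timestamp pairs, not the real Delete internals), folding the markers into one map keyed by family keeps only the last one, while issuing them one by one preserves both:

```java
import java.util.*;

// Hypothetical model of the reported behavior: batching DeleteFamily
// markers into one request keeps a single marker per family, so the
// proposed fix is to issue each marker as its own request.
public class DeleteFamilySketch {
    // Batched: a map keyed by family retains only the last marker.
    static int batched(List<long[]> markers) {
        Map<Long, Long> byFamily = new HashMap<>();
        for (long[] m : markers) byFamily.put(m[0], m[1]); // family -> ts
        return byFamily.size();
    }

    // One-by-one: every marker reaches the table individually.
    static int oneByOne(List<long[]> markers) {
        return markers.size();
    }

    public static void main(String[] args) {
        List<long[]> markers = Arrays.asList(
            new long[]{1L, 100L},  // first DeleteFamily for family 1
            new long[]{1L, 200L}); // second DeleteFamily, same family
        System.out.println(batched(markers) + " " + oneByOne(markers));
    }
}
```

The batched path yields one surviving marker against two issued, which matches the missing-marker symptom in the bug report.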





[jira] [Commented] (HBASE-12078) Missing Data when scanning using PREFIX_TREE DATA-BLOCK-ENCODING

2014-10-01 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14155067#comment-14155067
 ] 

Andrew Purtell commented on HBASE-12078:


Will commit shortly

 Missing Data when scanning using PREFIX_TREE DATA-BLOCK-ENCODING
 

 Key: HBASE-12078
 URL: https://issues.apache.org/jira/browse/HBASE-12078
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6.1
 Environment: CentOS 6.3
 hadoop 2.5.0(hdfs)
 hadoop 2.2.0(hbase)
 hbase 0.98.6.1
 sun-jdk 1.7.0_67-b01
Reporter: zhangduo
Assignee: zhangduo
Priority: Critical
 Fix For: 2.0.0, 0.98.7, 0.99.1

 Attachments: HBASE-12078.patch, HBASE-12078_1.patch, 
 prefix_tree_error.patch


 Our row key is composed of two ints, and we found that sometimes when we
 scan using only the first int part, the returned result may be missing some
 rows. But when we dump the whole hfile, the row is still there.
 We have written a testcase to reproduce the bug. It works like this:
 put 1-12345
 put 12345-0x0100
 put 12345-0x0101
 put 12345-0x0200
 put 12345-0x0202
 put 12345-0x0300
 put 12345-0x0303
 put 12345-0x0400
 put 12345-0x0404
 flush memstore
 then scan using 12345, the returned row key will be
 12345-0x2000 (12345-0x1000 expected)





[jira] [Commented] (HBASE-12039) Lower log level for TableNotFoundException log message when throwing

2014-10-01 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14155070#comment-14155070
 ] 

Andrew Purtell commented on HBASE-12039:


[~lhofhansl], is the above comment an objection?

 Lower log level for TableNotFoundException log message when throwing
 

 Key: HBASE-12039
 URL: https://issues.apache.org/jira/browse/HBASE-12039
 Project: HBase
  Issue Type: Bug
Reporter: James Taylor
Assignee: stack
Priority: Minor
 Fix For: 0.98.7, 0.94.25

 Attachments: 12039-0.94.txt, 12039.txt


 Our HBase client tries to get the HTable descriptor for a table that may or
 may not exist. We catch and ignore the TableNotFoundException if it occurs,
 but the log message appears regardless, which confuses our users. Would it
 be possible to lower the log level of this message, since the exception is
 already being thrown (leaving it up to the caller how to handle it)?
 14/09/20 20:01:54 WARN client.HConnectionManager$HConnectionImplementation: 
 Encountered problems when prefetch META table: 
 org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
 table: _IDX_TEST.TESTING, row=_IDX_TEST.TESTING,,99
 at 
 org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:151)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:1059)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1121)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1001)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:958)
 at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:251)
 at org.apache.hadoop.hbase.client.HTable.init(HTable.java:243)





[jira] [Updated] (HBASE-12031) Parallel Scanners inside Region

2014-10-01 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12031:
---
Fix Version/s: (was: 0.98.7)
   (was: 1.0.0)
   0.98.8

Moving to 0.98.8

 Parallel Scanners inside Region
 ---

 Key: HBASE-12031
 URL: https://issues.apache.org/jira/browse/HBASE-12031
 Project: HBase
  Issue Type: New Feature
  Components: Performance, Scanners
Affects Versions: 0.98.6
Reporter: Vladimir Rodionov
Assignee: Vladimir Rodionov
 Fix For: 2.0.0, 0.98.8, 0.99.1

 Attachments: HBASE-12031.2.patch, HBASE-12031.3.patch, 
 HBASE-12031.patch, ParallelScannerDesign.pdf, hbase-12031-tests.tar.gz


 This JIRA is to improve the performance of multiple scanners running on the
 same region in parallel. The scenarios where we will get performance benefits:
 * New TableInputFormat with input splits smaller than HBase Region.
 * Scanning during compaction (Compaction scanner and application scanner over 
 the same Region).
 Some JIRAs related to this one:
 https://issues.apache.org/jira/browse/HBASE-7336
 https://issues.apache.org/jira/browse/HBASE-5979 





[jira] [Commented] (HBASE-11973) The url of the token file location set by IntegrationTestImportTsv should point to the localfs

2014-10-01 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14155076#comment-14155076
 ] 

Andrew Purtell commented on HBASE-11973:


Will commit shortly

 The url of the token file location set by IntegrationTestImportTsv should 
 point to the localfs
 --

 Key: HBASE-11973
 URL: https://issues.apache.org/jira/browse/HBASE-11973
 Project: HBase
  Issue Type: Bug
Reporter: Devaraj Das
Assignee: Devaraj Das
 Fix For: 0.98.7

 Attachments: 11973-1.txt


 The location of the token file is on the local filesystem. 





[jira] [Updated] (HBASE-11964) Improve spreading replication load from failed regionservers

2014-10-01 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11964:
---
Status: Open  (was: Patch Available)

 Improve spreading replication load from failed regionservers
 

 Key: HBASE-11964
 URL: https://issues.apache.org/jira/browse/HBASE-11964
 Project: HBase
  Issue Type: Sub-task
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 2.0.0, 0.98.7, 0.99.1, 0.94.25

 Attachments: HBASE-11964-0.98.patch, HBASE-11964-0.98.patch, 
 HBASE-11964.patch, HBASE-11964.patch


 Improve replication source thread handling. Improve fanout when transferring 
 queues. Ensure replication sources terminate properly.





[jira] [Updated] (HBASE-11964) Improve spreading replication load from failed regionservers

2014-10-01 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11964:
---
Status: Patch Available  (was: Open)

 Improve spreading replication load from failed regionservers
 

 Key: HBASE-11964
 URL: https://issues.apache.org/jira/browse/HBASE-11964
 Project: HBase
  Issue Type: Sub-task
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 2.0.0, 0.98.7, 0.99.1, 0.94.25

 Attachments: HBASE-11964-0.98.patch, HBASE-11964-0.98.patch, 
 HBASE-11964.patch, HBASE-11964.patch


 Improve replication source thread handling. Improve fanout when transferring 
 queues. Ensure replication sources terminate properly.





[jira] [Commented] (HBASE-11907) Use the joni byte[] regex engine in place of j.u.regex in RegexStringComparator

2014-10-01 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14155078#comment-14155078
 ] 

Andrew Purtell commented on HBASE-11907:


Any chance for a review?

 Use the joni byte[] regex engine in place of j.u.regex in 
 RegexStringComparator
 ---

 Key: HBASE-11907
 URL: https://issues.apache.org/jira/browse/HBASE-11907
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 0.98.7, 0.99.1

 Attachments: HBASE-11907.patch, HBASE-11907.patch


 The joni regex engine (https://github.com/jruby/joni), a Java port of 
 Oniguruma regexp library done by the JRuby project, is:
 - MIT licensed
 - Designed to work with byte[] arguments instead of String
 - Capable of handling UTF8 encoding
 - Regex syntax compatible
 - Interruptible
 - *About twice as fast as j.u.regex*
 - Has JRuby's jcodings library as a dependency, also MIT licensed




