[jira] [Commented] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-03-02 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177412#comment-15177412
 ] 

Anoop Sam John commented on HBASE-15338:


I was against adding one more config. But since cache-on-read is the only area where 
we don't have a global config (others, like cache-on-write, all have global configs), 
I changed my mind.
A user cannot set the global level to false and then enable it on a single HCD; that 
won't be honored.
And what about enabling it at the Scan level? If the global config is not set to true 
but the Scan asks for caching, we still won't cache, right? Just asking for confirmation.
Please state this clearly in the release notes.
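
To make that precedence concrete, here is a minimal sketch of the interplay being discussed. The property name "hbase.block.data.cacheonread" is hypothetical and used only for illustration; the actual key and CacheConfig wiring are whatever the patch defines.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.client.Scan;

public final class CacheOnReadPrecedence {
  // A false global flag wins: neither the HCD nor the Scan can re-enable data block caching.
  static boolean cacheDataOnRead(Configuration conf, HColumnDescriptor family, Scan scan) {
    boolean global = conf.getBoolean("hbase.block.data.cacheonread", true); // hypothetical key
    return global && family.isBlockCacheEnabled() && scan.getCacheBlocks();
  }
}
{code}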

> Add a option to disable the data block cache for testing the performance of 
> underlying file system
> --
>
> Key: HBASE-15338
> URL: https://issues.apache.org/jira/browse/HBASE-15338
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15338-trunk-v1.diff, HBASE-15338-trunk-v2.diff, 
> HBASE-15338-trunk-v3.diff, HBASE-15338-trunk-v4.diff, 
> HBASE-15338-trunk-v5.diff, HBASE-15338-trunk-v6.diff
>
>
> When testing and comparing the performance of different file systems (HDFS, 
> Azure blob storage, AWS S3 and so on) for HBase, it's better to avoid the 
> effect of the HBase BlockCache and get the actual random read latency when a 
> data block is read from the underlying file system. (Usually, the index and 
> meta blocks should be cached in memory during the testing.)
> So we add an option in CacheConfig to disable the data block cache.
> Suggestions are welcome. Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15325) ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests

2016-03-02 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177404#comment-15177404
 ] 

Anoop Sam John commented on HBASE-15325:


{code}
// In every RPC response there should be at most a single partial result. Furthermore, if
// there is a partial result, it is guaranteed to be in the last position of the array.
Result last = resultsFromServer[resultsFromServer.length - 1];
Result partial = last.isPartial() ? last : null;
{code}
Looking at that comment: it is also no longer true if we change the meaning of 
partial. There can be more than one batch in a single RPC response. Are there any 
other places where we rely on such an assumption?
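
For illustration only, if partial is redefined so that several batched Results in one response can be partial, the client would have to look at every entry instead of only the last one. A rough sketch reusing resultsFromServer from the snippet above (java.util.List/ArrayList assumed), not the actual patch:

{code}
List<Result> partials = new ArrayList<>();
for (Result r : resultsFromServer) {
  if (r.isPartial()) {   // with batching this may hold for more than one entry
    partials.add(r);
  }
}
{code}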

> ResultScanner allowing partial result will miss the rest of the row if the 
> region is moved between two rpc requests
> ---
>
> Key: HBASE-15325
> URL: https://issues.apache.org/jira/browse/HBASE-15325
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Attachments: 15325-test.txt, HBASE-15325-v1.txt, HBASE-15325-v2.txt, 
> HBASE-15325-v3.txt, HBASE-15325-v5.txt, HBASE-15325-v6.1.txt, 
> HBASE-15325-v6.2.txt, HBASE-15325-v6.3.txt, HBASE-15325-v6.4.txt, 
> HBASE-15325-v6.5.txt, HBASE-15325-v6.txt
>
>
> HBASE-11544 allows a scan RPC to return part of a row to reduce the memory usage of 
> one RPC request, and the client can setAllowPartial or setBatch to get several 
> cells of a row instead of the whole row.
> However, the state of the scanner is saved on the server and we need it to get 
> the next part if the previous result was partial. If the region is moved to 
> another RS, the client will get a NotServingRegionException and open a new 
> scanner against the new RS, which is treated as a new scan starting from the end of 
> this row. So the remaining cells of the row from the last result will be missing.
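
For readers unfamiliar with the feature, a minimal client-side sketch of the options the description mentions; the table name "t" is a placeholder and the Connection is assumed to be managed by the caller:

{code}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;

public final class PartialScanExample {
  static void scanWithPartials(Connection connection) throws java.io.IOException {
    Scan scan = new Scan();
    scan.setAllowPartialResults(true);  // or scan.setBatch(100) to cap cells per Result
    try (Table table = connection.getTable(TableName.valueOf("t"));
         ResultScanner scanner = table.getScanner(scan)) {
      for (Result r : scanner) {
        // r may carry only part of a row; Result.isPartial() tells the caller which ones do
      }
    }
  }
}
{code}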



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-03-02 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177399#comment-15177399
 ] 

Liu Shaohui commented on HBASE-15338:
-

[~anoop.hbase] [~stack] [~jingcheng...@intel.com]
Any other suggestions?
I will commit it tomorrow if there are no objections. Thanks~

> Add a option to disable the data block cache for testing the performance of 
> underlying file system
> --
>
> Key: HBASE-15338
> URL: https://issues.apache.org/jira/browse/HBASE-15338
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15338-trunk-v1.diff, HBASE-15338-trunk-v2.diff, 
> HBASE-15338-trunk-v3.diff, HBASE-15338-trunk-v4.diff, 
> HBASE-15338-trunk-v5.diff, HBASE-15338-trunk-v6.diff
>
>
> When testing and comparing the performance of different file systems (HDFS, 
> Azure blob storage, AWS S3 and so on) for HBase, it's better to avoid the 
> effect of the HBase BlockCache and get the actual random read latency when a 
> data block is read from the underlying file system. (Usually, the index and 
> meta blocks should be cached in memory during the testing.)
> So we add an option in CacheConfig to disable the data block cache.
> Suggestions are welcome. Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15376) ScanNext metric is size-based while every other per-operation metric is time based

2016-03-02 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177354#comment-15177354
 ] 

stack commented on HBASE-15376:
---

Sounds good by me. If it just 'broke', no need to bring it along. Just release 
note it.

> ScanNext metric is size-based while every other per-operation metric is time 
> based
> --
>
> Key: HBASE-15376
> URL: https://issues.apache.org/jira/browse/HBASE-15376
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
> Attachments: HBASE-15376.patch, HBASE-15376_v1.patch
>
>
> We have per-operation metrics for {{Get}}, {{Mutate}}, {{Delete}}, 
> {{Increment}}, and {{ScanNext}}. 
> The metrics are emitted like: 
> {code}
>"Get_num_ops" : 4837505,
> "Get_min" : 0,
> "Get_max" : 296,
> "Get_mean" : 0.2934618155433431,
> "Get_median" : 0.0,
> "Get_75th_percentile" : 0.0,
> "Get_95th_percentile" : 1.0,
> "Get_99th_percentile" : 1.0,
> ...
> "ScanNext_num_ops" : 194705,
> "ScanNext_min" : 0,
> "ScanNext_max" : 18441,
> "ScanNext_mean" : 7468.274651395701,
> "ScanNext_median" : 583.0,
> "ScanNext_75th_percentile" : 583.0,
> "ScanNext_95th_percentile" : 13481.0,
> "ScanNext_99th_percentile" : 13481.0,
> {code}
> The problem is that Get, Mutate, Delete, Increment, Append and Replay are all time 
> based, tracking how long the operation ran, while ScanNext tracks 
> returned response sizes (returned cell sizes, to be exact). Obviously, this is 
> very confusing and you would only know this subtlety if you read the metrics 
> collection code. 
> Not sure how useful the ScanNext metric is as it is today. We can deprecate 
> it and introduce a time-based one to keep track of scan request latencies. 
> ps. Shamelessly using the parent jira (since these seem relevant). 
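
A minimal sketch of the time-based replacement being proposed, assuming a histogram sink analogous to the existing Get_*/Mutate_* metrics; scanTimeHisto, scanner, and caching are made-up names standing in for whatever the patch wires up:

{code}
long startNanos = System.nanoTime();
Result[] results = scanner.next(caching);            // one scan "next" request
long millis = (System.nanoTime() - startNanos) / 1_000_000;
scanTimeHisto.update(millis);                         // hypothetical latency histogram
{code}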



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15329) Cross-Site Scripting: Reflected in table.jsp

2016-03-02 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177343#comment-15177343
 ] 

stack commented on HBASE-15329:
---

Thanks lads.

> Cross-Site Scripting: Reflected in table.jsp
> 
>
> Key: HBASE-15329
> URL: https://issues.apache.org/jira/browse/HBASE-15329
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Reporter: stack
>Assignee: Samir Ahmic
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15329_v0.patch
>
>
> Minor issue where we write back the table name in a few places. Should clean it 
> up:
> {code}
>  } else { 
>   out.write("\nTable: ");
>   out.print( fqtn );
>   out.write("\n");
>  } 
> {code}
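
One way to neutralize the reflected value, sketched under the assumption that commons-lang is on the classpath; this is an illustration, not necessarily what the attached patch does:

{code}
String fqtn = request.getParameter("name");
if (fqtn != null) {
  out.write("\nTable: ");
  // HTML-escape the request parameter instead of echoing it back verbatim
  out.print(org.apache.commons.lang.StringEscapeUtils.escapeHtml(fqtn));
  out.write("\n");
}
{code}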



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15325) ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests

2016-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177320#comment-15177320
 ] 

Hadoop QA commented on HBASE-15325:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 2s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 33s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
35s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 48s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 
46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
47s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
37s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 18s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 23s 
{color} | {color:green} hbase-common in the patch passed with JDK v1.8.0. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 25s {color} 
| {color:red} hbase-client in the patch failed with JDK v1.8.0. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 7s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 24s 
{color} | {color:green} hbase-common in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 18s {color} 
| {

[jira] [Updated] (HBASE-15368) Add relative window support

2016-03-02 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-15368:
--
Resolution: Later
Status: Resolved  (was: Patch Available)

> Add relative window support
> ---
>
> Key: HBASE-15368
> URL: https://issues.apache.org/jira/browse/HBASE-15368
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HBASE-15368-v1.patch, HBASE-15368.patch
>
>
> To better determine 'hot' data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-9393) Hbase does not closing a closed socket resulting in many CLOSE_WAIT

2016-03-02 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177302#comment-15177302
 ] 

Ashish Singhi commented on HBASE-9393:
--

Test failures are not related to the patch.

> Hbase does not closing a closed socket resulting in many CLOSE_WAIT 
> 
>
> Key: HBASE-9393
> URL: https://issues.apache.org/jira/browse/HBASE-9393
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.2, 0.98.0
> Environment: Centos 6.4 - 7 regionservers/datanodes, 8 TB per node, 
> 7279 regions
>Reporter: Avi Zrachya
>Assignee: Ashish Singhi
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-9393.patch, HBASE-9393.v1.patch, 
> HBASE-9393.v10.patch, HBASE-9393.v11.patch, HBASE-9393.v12.patch, 
> HBASE-9393.v13.patch, HBASE-9393.v14.patch, HBASE-9393.v2.patch, 
> HBASE-9393.v3.patch, HBASE-9393.v4.patch, HBASE-9393.v5.patch, 
> HBASE-9393.v5.patch, HBASE-9393.v5.patch, HBASE-9393.v6.patch, 
> HBASE-9393.v6.patch, HBASE-9393.v6.patch, HBASE-9393.v7.patch, 
> HBASE-9393.v8.patch, HBASE-9393.v9.patch
>
>
> HBase does not close a dead connection with the datanode.
> This results in over 60K CLOSE_WAIT sockets and at some point HBase can not connect 
> to the datanode because there are too many mapped sockets from one host to another on 
> the same port.
> The example below is with a low CLOSE_WAIT count because we had to restart 
> hbase to solve the problem; later in time it will increase to 60-100K sockets 
> in CLOSE_WAIT
> [root@hd2-region3 ~]# netstat -nap |grep CLOSE_WAIT |grep 21592 |wc -l
> 13156
> [root@hd2-region3 ~]# ps -ef |grep 21592
> root     17255 17219  0 12:26 pts/0    00:00:00 grep 21592
> hbase    21592     1 17 Aug29 ?        03:29:06 
> /usr/java/jdk1.6.0_26/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Xmx8000m 
> -ea -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode 
> -Dhbase.log.dir=/var/log/hbase 
> -Dhbase.log.file=hbase-hbase-regionserver-hd2-region3.swnet.corp.log ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15137) CallTimeoutException and CallQueueTooBigException should trigger PFFE

2016-03-02 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-15137:

Status: Open  (was: Patch Available)

> CallTimeoutException and CallQueueTooBigException should trigger PFFE
> -
>
> Key: HBASE-15137
> URL: https://issues.apache.org/jira/browse/HBASE-15137
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Mikhail Antonov
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15137.mantonov.patch, HBASE-15137.patch, 
> HBASE-15137.v3.patch, HBASE-15137.v4.patch, HBASE-15137.v5.patch
>
>
> If a region server is backed up enough that lots of calls are timing out, then 
> we should think about treating the server as failing.
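
A rough sketch of the idea, purely for illustration (the real change lives in the client-side preemptive fast-fail bookkeeping): count these two exception types toward the same "server looks dead" accounting that connect failures already feed.

{code}
// Illustration only: treat these two exception types as evidence that the server is failing.
static boolean countsTowardFastFail(Throwable t) {
  return t instanceof org.apache.hadoop.hbase.ipc.CallTimeoutException
      || t instanceof org.apache.hadoop.hbase.CallQueueTooBigException;
}
{code}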



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15137) CallTimeoutException and CallQueueTooBigException should trigger PFFE

2016-03-02 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-15137:

Status: Patch Available  (was: Open)

> CallTimeoutException and CallQueueTooBigException should trigger PFFE
> -
>
> Key: HBASE-15137
> URL: https://issues.apache.org/jira/browse/HBASE-15137
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Mikhail Antonov
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15137.mantonov.patch, HBASE-15137.patch, 
> HBASE-15137.v3.patch, HBASE-15137.v4.patch, HBASE-15137.v5.patch
>
>
> If a region server is backed up enough that lots of calls are timing out, then 
> we should think about treating the server as failing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15137) CallTimeoutException and CallQueueTooBigException should trigger PFFE

2016-03-02 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-15137:

Attachment: HBASE-15137.v5.patch

Addressed checkstyle issues.

> CallTimeoutException and CallQueueTooBigException should trigger PFFE
> -
>
> Key: HBASE-15137
> URL: https://issues.apache.org/jira/browse/HBASE-15137
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Mikhail Antonov
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15137.mantonov.patch, HBASE-15137.patch, 
> HBASE-15137.v3.patch, HBASE-15137.v4.patch, HBASE-15137.v5.patch
>
>
> If a region server is backed up enough that lots of calls are timing out, then 
> we should think about treating the server as failing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15376) ScanNext metric is size-based while every other per-operation metric is time based

2016-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177262#comment-15177262
 ] 

Hadoop QA commented on HBASE-15376:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
20s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
38s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 12s 
{color} | {color:red} hbase-hadoop2-compat in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 4m 15s 
{color} | {color:red} Patch generated 1 new checkstyle issues in hbase-server 
(total was 140, now 140). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 17s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 31s 
{color} | {color:green} hbase-hadoop2-compat in the patch passed with JDK 
v1.8.0. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s 
{color} | {color:green} hbase-hadoop-compat in the patch passed with JDK 
v1.8.0. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 29m 38s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hbase-hadoop2-compat in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 18s 
{color} | {color:green} hbase-hadoop-compat in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 30m 47s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {col

[jira] [Commented] (HBASE-15295) MutateTableAccess.multiMutate() does not get high priority causing a deadlock

2016-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177232#comment-15177232
 ] 

Hadoop QA commented on HBASE-15295:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
31s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 45s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 8m 
2s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 36s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 2m 22s 
{color} | {color:red} Patch generated 1 new checkstyle issues in hbase-client 
(total was 532, now 465). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
38m 37s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 52s 
{color} | {color:green} hbase-client in the patch passed with JDK v1.8.0. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 38s 
{color} | {color:green} hbase-it in the patch passed with JDK v1.8.0. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 115m 20s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0. {color} 
|
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 38s 
{color} | {color:green} hbase-client in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 32s 
{color} | {color:green} hbase-it in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 138m 7s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 346m 54s {color} 
| {color:black} {color}

[jira] [Commented] (HBASE-15379) Fake cells created in read path not implementing SettableSequenceId

2016-03-02 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177201#comment-15177201
 ] 

ramkrishna.s.vasudevan commented on HBASE-15379:


Ya, okay to commit without a test as well. 

> Fake cells created in read path not implementing SettableSequenceId
> ---
>
> Key: HBASE-15379
> URL: https://issues.apache.org/jira/browse/HBASE-15379
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Amal Joshy
> Fix For: 2.0.0
>
> Attachments: HBASE-15379-v2.patch, HBASE-15379.patch
>
>
> This issue was found by [~appy]. In HBASE-14099 he says,
> I was doing some testing when I hit a weird issue, seems related to this, so 
> re-opening it (apologies in advance if it's not).  Here's the stack trace
> {noformat}
> java.io.IOException: java.lang.UnsupportedOperationException: Cell is not of 
> type org.apache.hadoop.hbase.SettableSequenceId
>   at org.apache.hadoop.hbase.CellUtil.setSequenceId(CellUtil.java:923)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.setCurrentCell(StoreFileScanner.java:231)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.requestSeek(StoreFileScanner.java:389)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:348)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:212)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:1873)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1863)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5487)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2577)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2563)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2544)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2534)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6659)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6624)
>   at 
> org.apache.hadoop.hbase.regionserver.TestWithSingleHRegion.test(TestWithSingleHRegion.java:48)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:117)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:234)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:74)
> {noformat}
> I think it's because of changing from KeyValue to a different sub-class 
> of {{Cell}} which doesn't implement {{SettableSequenceId}}.
> {noformat}
> -this.startKey = 
> KeyValueUtil.createFirstDeleteFamilyOnRow(scan.getStartRow(),
> +this.startKey = 
> CellUtil.createFirstDeleteFamilyCellOnRow(scan.getStartRow(),
> {noformat}
> To replicate it, download the attached hfiles somewhere, copy the 
> TestWithSingleHRegion class to regionserver tests, change the ROOT_DIR 
> appropriately and run it.
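
Roughly what is behind the stack trace above, and the gist of the fix: a sequence id can only be set on cells that implement SettableSequenceId, so the fake cells created on the read path need to implement that interface. The snippet below is a paraphrase of the check, not the exact source; Cell and SettableSequenceId are the org.apache.hadoop.hbase types.

{code}
static void setSequenceId(Cell cell, long seqId) throws IOException {
  if (!(cell instanceof SettableSequenceId)) {
    throw new IOException(new UnsupportedOperationException(
        "Cell is not of type " + SettableSequenceId.class.getName()));
  }
  ((SettableSequenceId) cell).setSequenceId(seqId);
}
{code}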



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15379) Fake cells created in read path not implementing SettableSequenceId

2016-03-02 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177179#comment-15177179
 ] 

Anoop Sam John commented on HBASE-15379:


Looks like it..  Ya, even the seqId is of no use here..  Actually we could avoid the call 
to setSeqId on these fake cells at that point..  But as said above, it was added to be on 
the safer side.. I fear we may land on this again later. IMHO we can get this in even 
without a test.

> Fake cells created in read path not implementing SettableSequenceId
> ---
>
> Key: HBASE-15379
> URL: https://issues.apache.org/jira/browse/HBASE-15379
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Amal Joshy
> Fix For: 2.0.0
>
> Attachments: HBASE-15379-v2.patch, HBASE-15379.patch
>
>
> This issue was found by [~appy]. In HBASE-14099 he says,
> I was doing some testing when I hit a weird issue, seems related to this, so 
> re-opening it (apologies in advance if it's not).  Here's the stack trace
> {noformat}
> java.io.IOException: java.lang.UnsupportedOperationException: Cell is not of 
> type org.apache.hadoop.hbase.SettableSequenceId
>   at org.apache.hadoop.hbase.CellUtil.setSequenceId(CellUtil.java:923)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.setCurrentCell(StoreFileScanner.java:231)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.requestSeek(StoreFileScanner.java:389)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:348)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:212)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:1873)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1863)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5487)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2577)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2563)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2544)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2534)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6659)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6624)
>   at 
> org.apache.hadoop.hbase.regionserver.TestWithSingleHRegion.test(TestWithSingleHRegion.java:48)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:117)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:234)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:74)
> {noformat}
> I think it's because of changing from KeyValue to a different sub-class 
> of {{Cell}} which doesn't implement {{SettableSequenceId}}.
> {noformat}
> -this.startKey = 
> KeyValueUtil.createFirstDeleteFamilyOnRow(scan.getStartRow(),
> +this.startKey = 
> CellUtil.createFirstDeleteFamilyCellOnRow(scan.getStartRow(),
> {noformat}
> To replicate it, download the attached hfiles somewhere, copy the 
> TestWithSingleHR

[jira] [Commented] (HBASE-15378) Scanner can not handle a heartbeat message with no results

2016-03-02 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177173#comment-15177173
 ] 

Phil Yang commented on HBASE-15378:
---

The patch can be committed to branch-1.1/1.2/1.3/1 and master. Any ideas? 
Thanks :)

> Scanner can not handle a heartbeat message with no results
> --
>
> Key: HBASE-15378
> URL: https://issues.apache.org/jira/browse/HBASE-15378
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Attachments: HBASE-15378-v1.txt, HBASE-15378-v2.txt, 
> HBASE-15378-v3.txt
>
>
> When a RS scanner gets a TIME_LIMIT_REACHED_MID_ROW state, it will stop 
> scanning, send back what it has read so far to the client, and mark the message as a 
> heartbeat message. If no cell has been read, it will be an empty 
> response. 
> However, ClientScanner only handles the situation where the client gets an 
> empty heartbeat and its cache is not empty. If the cache is empty too, it 
> will be regarded as end-of-region and a new scanner will be opened for the next region.
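
A minimal sketch of the distinction the description calls for; the names are made up and only illustrate the decision, not the patch:

{code}
// An empty heartbeat means the server hit its time limit mid-row: keep the same
// scanner and call next() again, instead of treating the response as end-of-region.
static boolean regionExhausted(boolean heartbeat, int resultsInResponse, int cachedResults) {
  if (heartbeat) {
    return false;
  }
  return resultsInResponse == 0 && cachedResults == 0;
}
{code}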



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15325) ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests

2016-03-02 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177157#comment-15177157
 ] 

Phil Yang commented on HBASE-15325:
---

Thanks Ted, the last patch was only meant to test the Yetus environment, so it must fail. 
I'll draft a new patch after code review is done and HBASE-15378 is committed to 
master, because the two patches may conflict on one line.

> ResultScanner allowing partial result will miss the rest of the row if the 
> region is moved between two rpc requests
> ---
>
> Key: HBASE-15325
> URL: https://issues.apache.org/jira/browse/HBASE-15325
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Attachments: 15325-test.txt, HBASE-15325-v1.txt, HBASE-15325-v2.txt, 
> HBASE-15325-v3.txt, HBASE-15325-v5.txt, HBASE-15325-v6.1.txt, 
> HBASE-15325-v6.2.txt, HBASE-15325-v6.3.txt, HBASE-15325-v6.4.txt, 
> HBASE-15325-v6.5.txt, HBASE-15325-v6.txt
>
>
> HBASE-11544 allows a scan RPC to return part of a row to reduce the memory usage of 
> one RPC request, and the client can setAllowPartial or setBatch to get several 
> cells of a row instead of the whole row.
> However, the state of the scanner is saved on the server and we need it to get 
> the next part if the previous result was partial. If the region is moved to 
> another RS, the client will get a NotServingRegionException and open a new 
> scanner against the new RS, which is treated as a new scan starting from the end of 
> this row. So the remaining cells of the row from the last result will be missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15379) Fake cells created in read path not implementing SettableSequenceId

2016-03-02 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177155#comment-15177155
 ] 

ramkrishna.s.vasudevan commented on HBASE-15379:


+1 on patch. If possible we can add a test case; that would be better. Not sure of the exact 
scenario. Does this happen only during bulk load?

> Fake cells created in read path not implementing SettableSequenceId
> ---
>
> Key: HBASE-15379
> URL: https://issues.apache.org/jira/browse/HBASE-15379
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Amal Joshy
> Fix For: 2.0.0
>
> Attachments: HBASE-15379-v2.patch, HBASE-15379.patch
>
>
> This issue was found by [~appy]. In HBASE-14099 he says,
> I was doing some testing when I hit a weird issue, seems related to this, so 
> re-opening it (apologies in advance if it's not).  Here's the stack trace
> {noformat}
> java.io.IOException: java.lang.UnsupportedOperationException: Cell is not of 
> type org.apache.hadoop.hbase.SettableSequenceId
>   at org.apache.hadoop.hbase.CellUtil.setSequenceId(CellUtil.java:923)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.setCurrentCell(StoreFileScanner.java:231)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.requestSeek(StoreFileScanner.java:389)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:348)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:212)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:1873)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1863)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5487)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2577)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2563)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2544)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2534)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6659)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6624)
>   at 
> org.apache.hadoop.hbase.regionserver.TestWithSingleHRegion.test(TestWithSingleHRegion.java:48)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:117)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:234)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:74)
> {noformat}
> I think it's because of changing from KeyValue to a different sub-class 
> of {{Cell}} which doesn't implement {{SettableSequenceId}}.
> {noformat}
> -this.startKey = 
> KeyValueUtil.createFirstDeleteFamilyOnRow(scan.getStartRow(),
> +this.startKey = 
> CellUtil.createFirstDeleteFamilyCellOnRow(scan.getStartRow(),
> {noformat}
> To replicate it, download the attached hfiles somewhere, copy the 
> TestWithSingleHRegion class to regionserver tests, change the ROOT_DIR 
> appropriately and run it.



--
This message was sent by At

[jira] [Resolved] (HBASE-15329) Cross-Site Scripting: Reflected in table.jsp

2016-03-02 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen resolved HBASE-15329.
---
  Resolution: Fixed
Hadoop Flags: Reviewed

> Cross-Site Scripting: Reflected in table.jsp
> 
>
> Key: HBASE-15329
> URL: https://issues.apache.org/jira/browse/HBASE-15329
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Reporter: stack
>Assignee: Samir Ahmic
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15329_v0.patch
>
>
> Minor issue where we write back the table name in a few places. Should clean it 
> up:
> {code}
>  } else { 
>   out.write("\nTable: ");
>   out.print( fqtn );
>   out.write("\n");
>  } 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15329) Cross-Site Scripting: Reflected in table.jsp

2016-03-02 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177136#comment-15177136
 ] 

Heng Chen commented on HBASE-15329:
---

Actually, it is not a real problem. If the URL parameter 'name' is not a valid 
table name in the cluster, it will return a 500 like the one below, and a table name is 
only valid with [a-zA-Z_0-9-.]. But I will still push it, just to keep the code unified 
with the rest of the XSS handling.

{code}
HTTP ERROR 500

Problem accessing /table.jsp. Reason:

Illegal character code:38, <&> at 0. User-space table qualifiers can only 
contain 'alphanumeric characters': i.e. [a-zA-Z_0-9-.]: