[jira] [Commented] (HBASE-10656) high-scale-lib's Counter depends on Oracle (Sun) JRE, and also has some bug

2014-03-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919111#comment-13919111
 ] 

Andrew Purtell commented on HBASE-10656:


bq. We should have a fallback implementation as you suggest.

+1

  high-scale-lib's Counter depends on Oracle (Sun) JRE, and also has some bug
 

 Key: HBASE-10656
 URL: https://issues.apache.org/jira/browse/HBASE-10656
 Project: HBase
  Issue Type: Bug
Reporter: Hiroshi Ikeda
Priority: Minor
 Attachments: MyCounter.java, MyCounterTest.java


 Cliff's high-scale-lib's Counter is used in important classes (for example, 
 HRegion) in HBase, but Counter uses sun.misc.Unsafe, that is implementation 
 detail of the Java standard library and belongs to Oracle (Sun). That 
 consequently makes HBase depend on the specific JRE Implementation.
 To make matters worse, Counter has a bug and you may get wrong result if you 
 mix a reading method into your logic calling writing methods.
 In more detail, I think the bug is caused by reading an internal array field 
 without resolving memory caching, which is intentional the comment says, but 
 storing the read result into a volatile field. That field may be not changed 
 after you can see the true values of the array field, and also may be not 
 changed after updating the next CAT instance's values in some race 
 condition when extending CAT instance chain.
 Anyway, it is possible that you create a new alternative class which only 
 depends on the standard library. I know Java8 provides its alternative, but 
 HBase should support Java6 and Java7 for some time.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10648) Pluggable Memstore

2014-03-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919117#comment-13919117
 ] 

Andrew Purtell commented on HBASE-10648:


Static class in an interface is weird? Just me?

{code}
+public interface MemStore extends HeapSize {
...
 static class SnapshotInfo {
...
{code}

Can we extract the MemstoreLAB to an interface and default implementation also? 
Maybe as a follow on issue?

 Pluggable Memstore
 --

 Key: HBASE-10648
 URL: https://issues.apache.org/jira/browse/HBASE-10648
 Project: HBase
  Issue Type: Sub-task
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: HBASE-10648.patch


 Make Memstore into an interface implementation.  Also make it pluggable by 
 configuring the FQCN of the impl.
 This will allow us to have different impl and optimizations in the Memstore 
 DataStructure and the upper layers untouched.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10625) Remove unnecessary key compare from AbstractScannerV2.reseekTo

2014-03-04 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10625:
--

Attachment: 10625-trunk-experimental.txt

 Remove unnecessary key compare from AbstractScannerV2.reseekTo
 --

 Key: HBASE-10625
 URL: https://issues.apache.org/jira/browse/HBASE-10625
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
 Attachments: 10625-0.94-experimental.txt, 10625-0.94.txt, 
 10625-trunk-experimental.txt, 10625-trunk.txt


 In reseekTo we find this
 {code}
 ...
 compared = compareKey(reader.getComparator(), key, offset, length);
 if (compared  1) {
   // If the required key is less than or equal to current key, then
   // don't do anything.
   return compared;
 } else {
...
return loadBlockAndSeekToKey(this.block, this.nextIndexedKey,
   false, key, offset, length, false);
 ...
 {code}
 loadBlockAndSeekToKey already does the right thing when a we pass a key that 
 sorts before the current key. It's less efficient than this early check, but 
 in the vast (all?) cases we pass forward keys (as required by the reseek 
 contract). We're optimizing the wrong thing.
 Scanning with the ExplicitColumnTracker is 20-30% faster.
 (I tested with rows of 5 short KVs selected the 2nd and or 4th column)
 I propose simply removing that check.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10625) Remove unnecessary key compare from AbstractScannerV2.reseekTo

2014-03-04 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10625:
--

Status: Patch Available  (was: Reopened)

Curious how many tests this would break. Let's try.

 Remove unnecessary key compare from AbstractScannerV2.reseekTo
 --

 Key: HBASE-10625
 URL: https://issues.apache.org/jira/browse/HBASE-10625
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
 Attachments: 10625-0.94-experimental.txt, 10625-0.94.txt, 
 10625-trunk-experimental.txt, 10625-trunk.txt


 In reseekTo we find this
 {code}
 ...
 compared = compareKey(reader.getComparator(), key, offset, length);
 if (compared  1) {
   // If the required key is less than or equal to current key, then
   // don't do anything.
   return compared;
 } else {
...
return loadBlockAndSeekToKey(this.block, this.nextIndexedKey,
   false, key, offset, length, false);
 ...
 {code}
 loadBlockAndSeekToKey already does the right thing when a we pass a key that 
 sorts before the current key. It's less efficient than this early check, but 
 in the vast (all?) cases we pass forward keys (as required by the reseek 
 contract). We're optimizing the wrong thing.
 Scanning with the ExplicitColumnTracker is 20-30% faster.
 (I tested with rows of 5 short KVs selected the 2nd and or 4th column)
 I propose simply removing that check.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Reopened] (HBASE-10625) Remove unnecessary key compare from AbstractScannerV2.reseekTo

2014-03-04 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reopened HBASE-10625:
---


 Remove unnecessary key compare from AbstractScannerV2.reseekTo
 --

 Key: HBASE-10625
 URL: https://issues.apache.org/jira/browse/HBASE-10625
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
 Attachments: 10625-0.94-experimental.txt, 10625-0.94.txt, 
 10625-trunk-experimental.txt, 10625-trunk.txt


 In reseekTo we find this
 {code}
 ...
 compared = compareKey(reader.getComparator(), key, offset, length);
 if (compared  1) {
   // If the required key is less than or equal to current key, then
   // don't do anything.
   return compared;
 } else {
...
return loadBlockAndSeekToKey(this.block, this.nextIndexedKey,
   false, key, offset, length, false);
 ...
 {code}
 loadBlockAndSeekToKey already does the right thing when a we pass a key that 
 sorts before the current key. It's less efficient than this early check, but 
 in the vast (all?) cases we pass forward keys (as required by the reseek 
 contract). We're optimizing the wrong thing.
 Scanning with the ExplicitColumnTracker is 20-30% faster.
 (I tested with rows of 5 short KVs selected the 2nd and or 4th column)
 I propose simply removing that check.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10656) high-scale-lib's Counter depends on Oracle (Sun) JRE, and also has some bug

2014-03-04 Thread Hiroshi Ikeda (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hiroshi Ikeda updated HBASE-10656:
--

Attachment: MyCounterTest.java

Added a revised test. Sorry, the previously added test prints wrong consumed 
time. I created non-blocking logic, which should cause few context switches, 
and I should have woken up at first the thread recording its start time.

Avoiding cache-line contention is one of the key words to improve performance, 
but ironically we only detect it via failure of CAS with intentionally 
colliding accesses. I think the size of cache-line differ from environments, 
and if the estimate is small the chance of the detection reduces. For my 
environment it seems the size of cache-line is 64bit * 8 and setting 
MyCounter.Cat.CACHE_LINE_SCALE to 4 works well.

I used a well-spreaded hashcode using Thread.getId() under the assumption that 
the ID is sequentially increasing, but that therefore reduces the chance of 
failure of CAS, and that is bad if estimate size of the cache-line is small. 
That's a dilemma.


  high-scale-lib's Counter depends on Oracle (Sun) JRE, and also has some bug
 

 Key: HBASE-10656
 URL: https://issues.apache.org/jira/browse/HBASE-10656
 Project: HBase
  Issue Type: Bug
Reporter: Hiroshi Ikeda
Priority: Minor
 Attachments: MyCounter.java, MyCounterTest.java, MyCounterTest.java


 Cliff's high-scale-lib's Counter is used in important classes (for example, 
 HRegion) in HBase, but Counter uses sun.misc.Unsafe, that is implementation 
 detail of the Java standard library and belongs to Oracle (Sun). That 
 consequently makes HBase depend on the specific JRE Implementation.
 To make matters worse, Counter has a bug and you may get wrong result if you 
 mix a reading method into your logic calling writing methods.
 In more detail, I think the bug is caused by reading an internal array field 
 without resolving memory caching, which is intentional the comment says, but 
 storing the read result into a volatile field. That field may be not changed 
 after you can see the true values of the array field, and also may be not 
 changed after updating the next CAT instance's values in some race 
 condition when extending CAT instance chain.
 Anyway, it is possible that you create a new alternative class which only 
 depends on the standard library. I know Java8 provides its alternative, but 
 HBase should support Java6 and Java7 for some time.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10656) high-scale-lib's Counter depends on Oracle (Sun) JRE, and also has some bug

2014-03-04 Thread Hiroshi Ikeda (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919140#comment-13919140
 ] 

Hiroshi Ikeda commented on HBASE-10656:
---

bq. MyCounter.Cat.CACHE_LINE_SCALE to 4

Sorry, MyCounter.Cat.CACHE_LINE_SCALE to 3 is correct because 8 = 1  3


  high-scale-lib's Counter depends on Oracle (Sun) JRE, and also has some bug
 

 Key: HBASE-10656
 URL: https://issues.apache.org/jira/browse/HBASE-10656
 Project: HBase
  Issue Type: Bug
Reporter: Hiroshi Ikeda
Priority: Minor
 Attachments: MyCounter.java, MyCounterTest.java, MyCounterTest.java


 Cliff's high-scale-lib's Counter is used in important classes (for example, 
 HRegion) in HBase, but Counter uses sun.misc.Unsafe, that is implementation 
 detail of the Java standard library and belongs to Oracle (Sun). That 
 consequently makes HBase depend on the specific JRE Implementation.
 To make matters worse, Counter has a bug and you may get wrong result if you 
 mix a reading method into your logic calling writing methods.
 In more detail, I think the bug is caused by reading an internal array field 
 without resolving memory caching, which is intentional the comment says, but 
 storing the read result into a volatile field. That field may be not changed 
 after you can see the true values of the array field, and also may be not 
 changed after updating the next CAT instance's values in some race 
 condition when extending CAT instance chain.
 Anyway, it is possible that you create a new alternative class which only 
 depends on the standard library. I know Java8 provides its alternative, but 
 HBase should support Java6 and Java7 for some time.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10549) when there is a hole,LoadIncrementalHFiles will hung up in an infinite loop.

2014-03-04 Thread yuanxinen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuanxinen updated HBASE-10549:
--

Status: Open  (was: Patch Available)

 when there is a hole,LoadIncrementalHFiles will hung up in an infinite loop.
 

 Key: HBASE-10549
 URL: https://issues.apache.org/jira/browse/HBASE-10549
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.11
Reporter: yuanxinen
 Fix For: 0.99.0






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10652) Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in rpc

2014-03-04 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10652:


Fix Version/s: 0.99.0

 Fix incorrect handling of IE that restores current thread's interrupt status 
 within while/for loops in rpc
 --

 Key: HBASE-10652
 URL: https://issues.apache.org/jira/browse/HBASE-10652
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Replication
Reporter: Feng Honghua
Assignee: Feng Honghua
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-10652-trunk_v1.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10652) Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in rpc

2014-03-04 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10652:


  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to trunk, thanks for the patch!

 Fix incorrect handling of IE that restores current thread's interrupt status 
 within while/for loops in rpc
 --

 Key: HBASE-10652
 URL: https://issues.apache.org/jira/browse/HBASE-10652
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Replication
Reporter: Feng Honghua
Assignee: Feng Honghua
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-10652-trunk_v1.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10650) Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in RegionServer

2014-03-04 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10650:


   Resolution: Fixed
Fix Version/s: 0.99.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I executed the tests locally, it worked.
Committed to trunk.
Thanks for the patch, Feng.

 Fix incorrect handling of IE that restores current thread's interrupt status 
 within while/for loops in RegionServer
 ---

 Key: HBASE-10650
 URL: https://issues.apache.org/jira/browse/HBASE-10650
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver
Reporter: Feng Honghua
Assignee: Feng Honghua
 Fix For: 0.99.0

 Attachments: HBASE-10650-trunk_v1.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-9999) Add support for small reverse scan

2014-03-04 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-:
---

Attachment: .v3.patch

 Add support for small reverse scan
 --

 Key: HBASE-
 URL: https://issues.apache.org/jira/browse/HBASE-
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Nicolas Liochon
 Attachments: .v1.patch, .v2.patch, .v3.patch


 HBASE-4811 adds feature of reverse scan. This JIRA adds the support for small 
 reverse scan.
 This is activated when both 'reversed' and 'small' attributes are true in 
 Scan Object



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9999) Add support for small reverse scan

2014-03-04 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919158#comment-13919158
 ] 

Nicolas Liochon commented on HBASE-:


v3 contains the doc changes identified by Ted. Commit is under way.

 Add support for small reverse scan
 --

 Key: HBASE-
 URL: https://issues.apache.org/jira/browse/HBASE-
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Nicolas Liochon
 Attachments: .v1.patch, .v2.patch, .v3.patch


 HBASE-4811 adds feature of reverse scan. This JIRA adds the support for small 
 reverse scan.
 This is activated when both 'reversed' and 'small' attributes are true in 
 Scan Object



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10662) RegionScanner should be closed and according lease should be cancelled in regionserver immediately if the related region has been re-opened during performing scan reques

2014-03-04 Thread Feng Honghua (JIRA)
Feng Honghua created HBASE-10662:


 Summary: RegionScanner should be closed and according lease should 
be cancelled in regionserver immediately if the related region has been 
re-opened during performing scan request
 Key: HBASE-10662
 URL: https://issues.apache.org/jira/browse/HBASE-10662
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Feng Honghua
Assignee: Feng Honghua


During regionserver processes scan request from client, it fails the request by 
throwing a wrapped NotServingRegionException to client if it finds the region 
related to the passed-in scanner-id has been re-opened, and it also removes the 
RegionScannerHolder from the scanners. In fact under this case, the old and 
invalid RegionScanner related to the passed-in scanner-id should be closed and 
the related lease should be cancelled at the mean time as well.

Currently region's related scanners aren't closed when closing the region, a 
region scanner is closed only when requested explicitly by client, or by 
expiration of the related lease, in this sense the close of region scanners is 
quite passive and lag.

Sounds reasonable to cleanup all related scanners and cancel these scanners' 
leases after closing a region?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10662) RegionScanner should be closed and according lease should be cancelled in regionserver immediately if we find the related region has been re-opened during performing sca

2014-03-04 Thread Feng Honghua (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Honghua updated HBASE-10662:
-

Summary: RegionScanner should be closed and according lease should be 
cancelled in regionserver immediately if we find the related region has been 
re-opened during performing scan request  (was: RegionScanner should be closed 
and according lease should be cancelled in regionserver immediately if the 
related region has been re-opened during performing scan request)

 RegionScanner should be closed and according lease should be cancelled in 
 regionserver immediately if we find the related region has been re-opened 
 during performing scan request
 --

 Key: HBASE-10662
 URL: https://issues.apache.org/jira/browse/HBASE-10662
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Feng Honghua
Assignee: Feng Honghua
 Attachments: HBASE-10662-trunk_v1.patch


 During regionserver processes scan request from client, it fails the request 
 by throwing a wrapped NotServingRegionException to client if it finds the 
 region related to the passed-in scanner-id has been re-opened, and it also 
 removes the RegionScannerHolder from the scanners. In fact under this case, 
 the old and invalid RegionScanner related to the passed-in scanner-id should 
 be closed and the related lease should be cancelled at the mean time as well.
 Currently region's related scanners aren't closed when closing the region, a 
 region scanner is closed only when requested explicitly by client, or by 
 expiration of the related lease, in this sense the close of region scanners 
 is quite passive and lag.
 Sounds reasonable to cleanup all related scanners and cancel these scanners' 
 leases after closing a region?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10662) RegionScanner should be closed and according lease should be cancelled in regionserver immediately if the related region has been re-opened during performing scan reques

2014-03-04 Thread Feng Honghua (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Honghua updated HBASE-10662:
-

Attachment: HBASE-10662-trunk_v1.patch

Patch with an immediate fix attached

Since no valid region for such a stale and invalid region scanner, so no 
according coprocessor calls such as 
region.getCoprocessorHost().preScannerClose(scanner) or 
region.getCoprocessorHost().postScannerClose(scanner)

 RegionScanner should be closed and according lease should be cancelled in 
 regionserver immediately if the related region has been re-opened during 
 performing scan request
 --

 Key: HBASE-10662
 URL: https://issues.apache.org/jira/browse/HBASE-10662
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Feng Honghua
Assignee: Feng Honghua
 Attachments: HBASE-10662-trunk_v1.patch


 During regionserver processes scan request from client, it fails the request 
 by throwing a wrapped NotServingRegionException to client if it finds the 
 region related to the passed-in scanner-id has been re-opened, and it also 
 removes the RegionScannerHolder from the scanners. In fact under this case, 
 the old and invalid RegionScanner related to the passed-in scanner-id should 
 be closed and the related lease should be cancelled at the mean time as well.
 Currently region's related scanners aren't closed when closing the region, a 
 region scanner is closed only when requested explicitly by client, or by 
 expiration of the related lease, in this sense the close of region scanners 
 is quite passive and lag.
 Sounds reasonable to cleanup all related scanners and cancel these scanners' 
 leases after closing a region?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10018) Change the location prefetch

2014-03-04 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919171#comment-13919171
 ] 

Nicolas Liochon commented on HBASE-10018:
-

I've just committed HBASE-. This patch is ready to go.
I see two options:
1) As it is, i.e. full removal of the features, interfaces kept deprecated for 
backward compatibility but they do nothing
2) Change the patch to keep the prefetch as an option, deactivated by default.

I've got a small preference for 1), but I don't mind doing 2).
The reason for 2) would be that if there is a performance degradation for some 
use cases, we can use the option to to keep us safe.
My reasons for prefering 1) are:
- I remove much more code this way: with 2, I'm unclear about what to do with 
the existing code to put/remove a table in the to prefect list.
- It's often better to have an optimized code path rather than 2 average.
- an option non activated will become more or less obsolete or buggy very 
quickly, so it may not help that much...

As you like guys :-)

 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10018.v1.patch, 10018.v2.patch, 10018.v4.patch, 
 10018v3.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-9999) Add support for small reverse scan

2014-03-04 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-:
---

   Resolution: Fixed
Fix Version/s: 0.99.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed, thanks for the review, Ted, Stack and Enis!

 Add support for small reverse scan
 --

 Key: HBASE-
 URL: https://issues.apache.org/jira/browse/HBASE-
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: .v1.patch, .v2.patch, .v3.patch


 HBASE-4811 adds feature of reverse scan. This JIRA adds the support for small 
 reverse scan.
 This is activated when both 'reversed' and 'small' attributes are true in 
 Scan Object



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-9708) Improve Snapshot Name Error Message

2014-03-04 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-9708:
---

   Resolution: Fixed
Fix Version/s: 0.94.18
   0.99.0
   0.98.1
   0.96.2
 Assignee: Matteo Bertozzi
   Status: Resolved  (was: Patch Available)

 Improve Snapshot Name Error Message
 ---

 Key: HBASE-9708
 URL: https://issues.apache.org/jira/browse/HBASE-9708
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.94.2
Reporter: Jesse Anderson
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-9708.1.patch, HBASE-9708.1.patch


 The output for snapshots when you enter an invalid snapshot name talks about 
 User-space table names instead of Snapshot names. The error message 
 should say Snapshot names can only contain
 Here is an example of the output:
 {noformat}
 hbase(main):001:0 snapshot 'user', 'asdf asdf'
 ERROR: java.lang.IllegalArgumentException: Illegal character 32 at 4. 
 User-space table names can only contain 'word characters': i.e. 
 [a-zA-Z_0-9-.]: asdf asdf
 Here is some help for this command:
 Take a snapshot of specified table. Examples:
   hbase snapshot 'sourceTable', 'snapshotName'
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10549) when there is a hole,LoadIncrementalHFiles will hung up in an infinite loop.

2014-03-04 Thread yuanxinen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuanxinen updated HBASE-10549:
--

Attachment: HBASE-10549-trunk.patch

Here is the patch for trunk.
In the patch checking for any region holes and throwing exception if any.
Added test case to reproduce the issue and its passing with the fix.

Please review.

 when there is a hole,LoadIncrementalHFiles will hung up in an infinite loop.
 

 Key: HBASE-10549
 URL: https://issues.apache.org/jira/browse/HBASE-10549
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.11
Reporter: yuanxinen
 Fix For: 0.99.0

 Attachments: HBASE-10549-trunk.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10549) when there is a hole,LoadIncrementalHFiles will hung up in an infinite loop.

2014-03-04 Thread yuanxinen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuanxinen updated HBASE-10549:
--

Fix Version/s: 0.94.18
   0.98.1
   0.96.2
   Status: Patch Available  (was: Open)

 when there is a hole,LoadIncrementalHFiles will hung up in an infinite loop.
 

 Key: HBASE-10549
 URL: https://issues.apache.org/jira/browse/HBASE-10549
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.11
Reporter: yuanxinen
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-10549-trunk.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10549) when there is a hole,LoadIncrementalHFiles will hung up in an infinite loop.

2014-03-04 Thread yuanxinen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuanxinen updated HBASE-10549:
--

Component/s: (was: HFile)

 when there is a hole,LoadIncrementalHFiles will hung up in an infinite loop.
 

 Key: HBASE-10549
 URL: https://issues.apache.org/jira/browse/HBASE-10549
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.11
Reporter: yuanxinen
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-10549-trunk.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10549) when there is a hole,LoadIncrementalHFiles will hung up in an infinite loop.

2014-03-04 Thread yuanxinen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuanxinen updated HBASE-10549:
--

Description: 
First,I explan my test steps.
1.importtsv
2.split the region
3.delete the region info from .META.(make a hole)
4.LoadIncrementalHFiles (this step will hung up in an infinite loop)
I check the log,there are two issues
1.it create _tmp folder in an infinite loop.
hdfs://hacluster/output3/i/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/test_table,136.bottom
2.when slpliting the hfile,it put the first line data(1211) into two files(top 
and bottom)
Input 
File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.top,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
Input 
File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.bottom,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
and then I check the code.
So I think before spliting the hfile,we should check the consistency of 
startkey and endkey,if something wrong,we should throw the exception,and stop 
LoadIncrementalHFiles.


 when there is a hole,LoadIncrementalHFiles will hung up in an infinite loop.
 

 Key: HBASE-10549
 URL: https://issues.apache.org/jira/browse/HBASE-10549
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.11
Reporter: yuanxinen
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-10549-trunk.patch


 First,I explan my test steps.
 1.importtsv
 2.split the region
 3.delete the region info from .META.(make a hole)
 4.LoadIncrementalHFiles (this step will hung up in an infinite loop)
 I check the log,there are two issues
 1.it create _tmp folder in an infinite loop.
 hdfs://hacluster/output3/i/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/test_table,136.bottom
 2.when slpliting the hfile,it put the first line data(1211) into two 
 files(top and bottom)
 Input 
 File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.top,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
 Input 
 File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.bottom,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
 and then I check the code.
 So I think before spliting the hfile,we should check the consistency of 
 startkey and endkey,if something wrong,we should throw the exception,and stop 
 LoadIncrementalHFiles.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10549) when there is a hole,LoadIncrementalHFiles will hung up in an infinite loop.

2014-03-04 Thread yuanxinen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuanxinen updated HBASE-10549:
--

Component/s: HFile

 when there is a hole,LoadIncrementalHFiles will hung up in an infinite loop.
 

 Key: HBASE-10549
 URL: https://issues.apache.org/jira/browse/HBASE-10549
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.11
Reporter: yuanxinen
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-10549-trunk.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9999) Add support for small reverse scan

2014-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919200#comment-13919200
 ] 

Hadoop QA commented on HBASE-:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12632476/.v3.patch
  against trunk revision .
  ATTACHMENT ID: 12632476

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8875//console

This message is automatically generated.

 Add support for small reverse scan
 --

 Key: HBASE-
 URL: https://issues.apache.org/jira/browse/HBASE-
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: .v1.patch, .v2.patch, .v3.patch


 HBASE-4811 adds feature of reverse scan. This JIRA adds the support for small 
 reverse scan.
 This is activated when both 'reversed' and 'small' attributes are true in 
 Scan Object



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10532) Make KeyValueComparator in KeyValue to accept Cell instead of KeyValue.

2014-03-04 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10532:
---

Attachment: HBASE-10532_2.patch

Patch that moves required comparators to the cell comparator. We can add as and 
when we need it.

 Make KeyValueComparator in KeyValue to accept Cell instead of KeyValue.
 ---

 Key: HBASE-10532
 URL: https://issues.apache.org/jira/browse/HBASE-10532
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0

 Attachments: HBASE-10532.patch, HBASE-10532.patch, HBASE-10532_2.patch


 public int compareRows(final KeyValue left, final KeyValue right)
 public boolean matchingRows(final KeyValue left, final KeyValue right)
 We can make them to use Cells instead of KeyValue incase we need to use them 
 for comparison of any type of cell in future.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10532) Make KeyValueComparator in KeyValue to accept Cell instead of KeyValue.

2014-03-04 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10532:
---

Status: Open  (was: Patch Available)

 Make KeyValueComparator in KeyValue to accept Cell instead of KeyValue.
 ---

 Key: HBASE-10532
 URL: https://issues.apache.org/jira/browse/HBASE-10532
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0

 Attachments: HBASE-10532.patch, HBASE-10532.patch, HBASE-10532_2.patch


 public int compareRows(final KeyValue left, final KeyValue right)
 public boolean matchingRows(final KeyValue left, final KeyValue right)
 We can make them to use Cells instead of KeyValue incase we need to use them 
 for comparison of any type of cell in future.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10532) Make KeyValueComparator in KeyValue to accept Cell instead of KeyValue.

2014-03-04 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10532:
---

Status: Patch Available  (was: Open)

 Make KeyValueComparator in KeyValue to accept Cell instead of KeyValue.
 ---

 Key: HBASE-10532
 URL: https://issues.apache.org/jira/browse/HBASE-10532
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0

 Attachments: HBASE-10532.patch, HBASE-10532.patch, HBASE-10532_2.patch


 public int compareRows(final KeyValue left, final KeyValue right)
 public boolean matchingRows(final KeyValue left, final KeyValue right)
 We can make them to use Cells instead of KeyValue incase we need to use them 
 for comparison of any type of cell in future.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10662) RegionScanner should be closed and according lease should be cancelled in regionserver immediately if we find the related region has been re-opened during performing s

2014-03-04 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919208#comment-13919208
 ] 

Feng Honghua commented on HBASE-10662:
--

When we find the region has been re-opened during serving scan request from 
client in regionserver, if we only remove RegionScannerHolder  from scanners 
but don't close the related scanner. The related lease will be cancelled when 
it expires, but the related region scanner won't be closed in leaseExpired as 
expected:
{code}
  public void leaseExpired() {
  RegionScannerHolder rsh = scanners.remove(this.scannerName);
  if (rsh != null) {
RegionScanner s = rsh.s;
LOG.info(Scanner  + this.scannerName +  lease expired on region 
+ s.getRegionInfo().getRegionNameAsString());
try {
  HRegion region = getRegion(s.getRegionInfo().getRegionName());
  if (region != null  region.getCoprocessorHost() != null) {
region.getCoprocessorHost().preScannerClose(s);
  }

  s.close();
  if (region != null  region.getCoprocessorHost() != null) {
region.getCoprocessorHost().postScannerClose(s);
  }
} catch (IOException e) {
  LOG.error(Closing scanner for 
  + s.getRegionInfo().getRegionNameAsString(), e);
}
  } else {
LOG.info(Scanner  + this.scannerName +  lease expired);
  }
}
{code}
In above code, scanners.remove(this.scannerName) will return a null rsh since 
it has been removed earlier, so the region scanner can't be closed here, which 
means the related region scanner doesn't have a chance to be closed ever.

 RegionScanner should be closed and according lease should be cancelled in 
 regionserver immediately if we find the related region has been re-opened 
 during performing scan request
 --

 Key: HBASE-10662
 URL: https://issues.apache.org/jira/browse/HBASE-10662
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Feng Honghua
Assignee: Feng Honghua
 Attachments: HBASE-10662-trunk_v1.patch


 During regionserver processes scan request from client, it fails the request 
 by throwing a wrapped NotServingRegionException to client if it finds the 
 region related to the passed-in scanner-id has been re-opened, and it also 
 removes the RegionScannerHolder from the scanners. In fact under this case, 
 the old and invalid RegionScanner related to the passed-in scanner-id should 
 be closed and the related lease should be cancelled at the mean time as well.
 Currently region's related scanners aren't closed when closing the region, a 
 region scanner is closed only when requested explicitly by client, or by 
 expiration of the related lease, in this sense the close of region scanners 
 is quite passive and lag.
 Sounds reasonable to cleanup all related scanners and cancel these scanners' 
 leases after closing a region?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10663) Refactor/cleanup of class Leases and ScannerListener.leaseExpired

2014-03-04 Thread Feng Honghua (JIRA)
Feng Honghua created HBASE-10663:


 Summary: Refactor/cleanup of class Leases and 
ScannerListener.leaseExpired
 Key: HBASE-10663
 URL: https://issues.apache.org/jira/browse/HBASE-10663
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: Feng Honghua
Assignee: Feng Honghua
Priority: Minor


Some cleanup of Leases and ScannerListener.leaseExpired:
# Reject renewLease if stopRequested (same as addLease, stopRequested means 
Leases is asked to stop and is waiting for all remained leases to expire)
# Raise log level from info to warn for case that no related region scanner 
found when a lease expires (should it be an error?)
# Replace System.currentTimeMillis() with 
EnvironmentEdgeManager.currentTimeMillis()
# Correct some wrong comments and remove some irrelevant comments(Queue rather 
than Map is used for leases before?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10663) Refactor/cleanup of class Leases and ScannerListener.leaseExpired

2014-03-04 Thread Feng Honghua (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Honghua updated HBASE-10663:
-

Attachment: HBASE-10663-trunk_v1.patch

 Refactor/cleanup of class Leases and ScannerListener.leaseExpired
 -

 Key: HBASE-10663
 URL: https://issues.apache.org/jira/browse/HBASE-10663
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: Feng Honghua
Assignee: Feng Honghua
Priority: Minor
 Attachments: HBASE-10663-trunk_v1.patch


 Some cleanup of Leases and ScannerListener.leaseExpired:
 # Reject renewLease if stopRequested (same as addLease, stopRequested means 
 Leases is asked to stop and is waiting for all remained leases to expire)
 # Raise log level from info to warn for case that no related region scanner 
 found when a lease expires (should it be an error?)
 # Replace System.currentTimeMillis() with 
 EnvironmentEdgeManager.currentTimeMillis()
 # Correct some wrong comments and remove some irrelevant comments(Queue 
 rather than Map is used for leases before?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10622) Improve log and Exceptions in Export Snapshot

2014-03-04 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-10622:


   Resolution: Fixed
Fix Version/s: 0.94.18
   0.98.1
   0.96.2
   Status: Resolved  (was: Patch Available)

 Improve log and Exceptions in Export Snapshot 
 --

 Key: HBASE-10622
 URL: https://issues.apache.org/jira/browse/HBASE-10622
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-10622-v0.patch, HBASE-10622-v1.patch, 
 HBASE-10622-v2.patch, HBASE-10622-v3.patch, HBASE-10622-v4.patch


 from the logs of export snapshot is not really clear what's going on,
 adding some extra information useful to debug, and in some places the real 
 exception can be thrown



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-9270) [0.94] FSTableDescriptors caching is racy

2014-03-04 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated HBASE-9270:
---

Attachment: HBASE-9270.1.patch

Patch has only forced return, as 0.94 has refined getTableInfoModtime(), 
getTableDirModtime(), and overloaded getTableInfoPath() methods.

 [0.94] FSTableDescriptors caching is racy
 -

 Key: HBASE-9270
 URL: https://issues.apache.org/jira/browse/HBASE-9270
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.11
Reporter: Andrew Purtell
Priority: Minor
 Attachments: HBASE-9270.1.patch


 An occasionally failing test in 0.92 branch that concurrently executes master 
 operations on a single table found this problem in FSTableDescriptors:
 {code}
 diff --git src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java 
 src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java
 index e882621..b0042cd 100644
 --- src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java
 +++ src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java
 @@ -221,8 +221,15 @@ public class FSTableDescriptors implements 
 TableDescriptors {
  if 
 (HConstants.HBASE_NON_USER_TABLE_DIRS.contains(htd.getNameAsString())) {
throw new NotImplementedException();
  }
 -if (!this.fsreadonly) updateHTableDescriptor(this.fs, this.rootdir, htd);
 -long modtime = getTableInfoModtime(this.fs, this.rootdir, 
 htd.getNameAsString());
 +if (fsreadonly) {
 +  // Cannot cache here.
 +  // We can't know if a modtime from the most recent file found in a
 +  // directory listing at some arbitrary point in time still corresponds
 +  // to the latest, nor that our htd is the latest.
 +  return;
 +}
 +// Cache with the modtime of the descriptor we wrote
 +long modtime = updateHTableDescriptor(this.fs, this.rootdir, 
 htd).getModificationTime();
  this.cache.put(htd.getNameAsString(), new 
 TableDescriptorModtime(modtime, htd));
}
 {code}
 After HBASE-7305 master operations are serialized by a write lock on the 
 table.
 However, 0.94 has code with the same issue:
 {code}
   @Override
   public void add(HTableDescriptor htd) throws IOException {
 if (Bytes.equals(HConstants.ROOT_TABLE_NAME, htd.getName())) {
   throw new NotImplementedException();
 }
 if (Bytes.equals(HConstants.META_TABLE_NAME, htd.getName())) {
   throw new NotImplementedException();
 }
 if (HConstants.HBASE_NON_USER_TABLE_DIRS.contains(htd.getNameAsString())) 
 {
   throw new NotImplementedException();
 }
 if (!this.fsreadonly) updateHTableDescriptor(this.fs, this.rootdir, htd);
 String tableName = htd.getNameAsString();
 long modtime = getTableInfoModtime(this.fs, this.rootdir, tableName);
 long dirmodtime = getTableDirModtime(this.fs, this.rootdir, tableName);
 this.cache.put(tableName, new TableDescriptorModtime(modtime, dirmodtime, 
 htd));
   }
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-9270) [0.94] FSTableDescriptors caching is racy

2014-03-04 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated HBASE-9270:
---

Status: Patch Available  (was: Open)

Patch has only forced return, as 0.94 has refined getTableInfoModtime(), 
getTableDirModtime(), and overloaded getTableInfoPath() methods.

 [0.94] FSTableDescriptors caching is racy
 -

 Key: HBASE-9270
 URL: https://issues.apache.org/jira/browse/HBASE-9270
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.11
Reporter: Andrew Purtell
Priority: Minor
 Attachments: HBASE-9270.1.patch


 An occasionally failing test in 0.92 branch that concurrently executes master 
 operations on a single table found this problem in FSTableDescriptors:
 {code}
 diff --git src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java 
 src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java
 index e882621..b0042cd 100644
 --- src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java
 +++ src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java
 @@ -221,8 +221,15 @@ public class FSTableDescriptors implements 
 TableDescriptors {
  if 
 (HConstants.HBASE_NON_USER_TABLE_DIRS.contains(htd.getNameAsString())) {
throw new NotImplementedException();
  }
 -if (!this.fsreadonly) updateHTableDescriptor(this.fs, this.rootdir, htd);
 -long modtime = getTableInfoModtime(this.fs, this.rootdir, 
 htd.getNameAsString());
 +if (fsreadonly) {
 +  // Cannot cache here.
 +  // We can't know if a modtime from the most recent file found in a
 +  // directory listing at some arbitrary point in time still corresponds
 +  // to the latest, nor that our htd is the latest.
 +  return;
 +}
 +// Cache with the modtime of the descriptor we wrote
 +long modtime = updateHTableDescriptor(this.fs, this.rootdir, 
 htd).getModificationTime();
  this.cache.put(htd.getNameAsString(), new 
 TableDescriptorModtime(modtime, htd));
}
 {code}
 After HBASE-7305 master operations are serialized by a write lock on the 
 table.
 However, 0.94 has code with the same issue:
 {code}
   @Override
   public void add(HTableDescriptor htd) throws IOException {
 if (Bytes.equals(HConstants.ROOT_TABLE_NAME, htd.getName())) {
   throw new NotImplementedException();
 }
 if (Bytes.equals(HConstants.META_TABLE_NAME, htd.getName())) {
   throw new NotImplementedException();
 }
 if (HConstants.HBASE_NON_USER_TABLE_DIRS.contains(htd.getNameAsString())) 
 {
   throw new NotImplementedException();
 }
 if (!this.fsreadonly) updateHTableDescriptor(this.fs, this.rootdir, htd);
 String tableName = htd.getNameAsString();
 long modtime = getTableInfoModtime(this.fs, this.rootdir, tableName);
 long dirmodtime = getTableDirModtime(this.fs, this.rootdir, tableName);
 this.cache.put(tableName, new TableDescriptorModtime(modtime, dirmodtime, 
 htd));
   }
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10537) Let the ExportSnapshot mapper fail and retry on error

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919276#comment-13919276
 ] 

Hudson commented on HBASE-10537:


FAILURE: Integrated in HBase-0.94-security #428 (See 
[https://builds.apache.org/job/HBase-0.94-security/428/])
HBASE-10537 Let the ExportSnapshot mapper fail and retry on error (mbertozzi: 
rev 1574016)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java


 Let the ExportSnapshot mapper fail and retry on error
 -

 Key: HBASE-10537
 URL: https://issues.apache.org/jira/browse/HBASE-10537
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: HBASE-10537-v1.patch, HBASE-10537-v2.patch


 Instead of completing the job, and force the user to re-run the export if 
 something failed, let the Mapper fail and retry automatically based on the 
 job.getMaxMapAttempts()



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10567) Add overwrite manifest option to ExportSnapshot

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919274#comment-13919274
 ] 

Hudson commented on HBASE-10567:


FAILURE: Integrated in HBase-0.94-security #428 (See 
[https://builds.apache.org/job/HBase-0.94-security/428/])
HBASE-10567 Add overwrite manifest option to ExportSnapshot (mbertozzi: rev 
1574017)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java


 Add overwrite manifest option to ExportSnapshot
 ---

 Key: HBASE-10567
 URL: https://issues.apache.org/jira/browse/HBASE-10567
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.98.0, 0.94.16, 0.99.0, 0.96.1.1
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: HBASE-10567-v0.patch, HBASE-10567-v1.patch


 If you want to export a snapshot twice (e.g. in case you accidentally removed 
 a file and now your snapshot is corrupted) you have to manually remove the 
 .hbase-snapshot/SNAPSHOT_NAME directory and then run the ExportSnapshot tool.
 Add an -overwrite option to this operation automatically.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9708) Improve Snapshot Name Error Message

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919275#comment-13919275
 ] 

Hudson commented on HBASE-9708:
---

FAILURE: Integrated in HBase-0.94-security #428 (See 
[https://builds.apache.org/job/HBase-0.94-security/428/])
HBASE-9708 Improve Snapshot Name Error Message (Esteban Gutierrez) (mbertozzi: 
rev 1573962)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotDescriptionUtils.java


 Improve Snapshot Name Error Message
 ---

 Key: HBASE-9708
 URL: https://issues.apache.org/jira/browse/HBASE-9708
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.94.2
Reporter: Jesse Anderson
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-9708.1.patch, HBASE-9708.1.patch


 The output for snapshots when you enter an invalid snapshot name talks about 
 User-space table names instead of Snapshot names. The error message 
 should say Snapshot names can only contain
 Here is an example of the output:
 {noformat}
 hbase(main):001:0 snapshot 'user', 'asdf asdf'
 ERROR: java.lang.IllegalArgumentException: Illegal character 32 at 4. 
 User-space table names can only contain 'word characters': i.e. 
 [a-zA-Z_0-9-.]: asdf asdf
 Here is some help for this command:
 Take a snapshot of specified table. Examples:
   hbase snapshot 'sourceTable', 'snapshotName'
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10532) Make KeyValueComparator in KeyValue to accept Cell instead of KeyValue.

2014-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919297#comment-13919297
 ] 

Hadoop QA commented on HBASE-10532:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12632483/HBASE-10532_2.patch
  against trunk revision .
  ATTACHMENT ID: 12632483

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  public static int compareRowsWithCommonFamilyPrefix(Cell left, Cell 
right, int familyCommonPrefix) {
+  public static int compareRowsWithQualifierFamilyPrefix(Cell left, Cell 
right, int qualCommonPrefix) {
+  public static int findCommonPrefixInQualifierPart(Cell left, Cell right, int 
qualifierCommonPrefix) {

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8877//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8877//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8877//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8877//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8877//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8877//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8877//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8877//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8877//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8877//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8877//console

This message is automatically generated.

 Make KeyValueComparator in KeyValue to accept Cell instead of KeyValue.
 ---

 Key: HBASE-10532
 URL: https://issues.apache.org/jira/browse/HBASE-10532
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0

 Attachments: HBASE-10532.patch, HBASE-10532.patch, HBASE-10532_2.patch


 public int compareRows(final KeyValue left, final KeyValue right)
 public boolean matchingRows(final KeyValue left, final KeyValue right)
 We can make them to use Cells instead of KeyValue incase we need to use them 
 for comparison of any type of cell in future.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10537) Let the ExportSnapshot mapper fail and retry on error

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919300#comment-13919300
 ] 

Hudson commented on HBASE-10537:


FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #38 (See 
[https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/38/])
HBASE-10537 Let the ExportSnapshot mapper fail and retry on error (mbertozzi: 
rev 1574016)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java


 Let the ExportSnapshot mapper fail and retry on error
 -

 Key: HBASE-10537
 URL: https://issues.apache.org/jira/browse/HBASE-10537
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: HBASE-10537-v1.patch, HBASE-10537-v2.patch


 Instead of completing the job, and force the user to re-run the export if 
 something failed, let the Mapper fail and retry automatically based on the 
 job.getMaxMapAttempts()



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9708) Improve Snapshot Name Error Message

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919299#comment-13919299
 ] 

Hudson commented on HBASE-9708:
---

FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #38 (See 
[https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/38/])
HBASE-9708 Improve Snapshot Name Error Message (Esteban Gutierrez) (mbertozzi: 
rev 1573962)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotDescriptionUtils.java


 Improve Snapshot Name Error Message
 ---

 Key: HBASE-9708
 URL: https://issues.apache.org/jira/browse/HBASE-9708
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.94.2
Reporter: Jesse Anderson
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-9708.1.patch, HBASE-9708.1.patch


 The output for snapshots when you enter an invalid snapshot name talks about 
 User-space table names instead of Snapshot names. The error message 
 should say Snapshot names can only contain
 Here is an example of the output:
 {noformat}
 hbase(main):001:0 snapshot 'user', 'asdf asdf'
 ERROR: java.lang.IllegalArgumentException: Illegal character 32 at 4. 
 User-space table names can only contain 'word characters': i.e. 
 [a-zA-Z_0-9-.]: asdf asdf
 Here is some help for this command:
 Take a snapshot of specified table. Examples:
   hbase snapshot 'sourceTable', 'snapshotName'
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10567) Add overwrite manifest option to ExportSnapshot

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919298#comment-13919298
 ] 

Hudson commented on HBASE-10567:


FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #38 (See 
[https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/38/])
HBASE-10567 Add overwrite manifest option to ExportSnapshot (mbertozzi: rev 
1574017)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java


 Add overwrite manifest option to ExportSnapshot
 ---

 Key: HBASE-10567
 URL: https://issues.apache.org/jira/browse/HBASE-10567
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.98.0, 0.94.16, 0.99.0, 0.96.1.1
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: HBASE-10567-v0.patch, HBASE-10567-v1.patch


 If you want to export a snapshot twice (e.g. in case you accidentally removed 
 a file and now your snapshot is corrupted) you have to manually remove the 
 .hbase-snapshot/SNAPSHOT_NAME directory and then run the ExportSnapshot tool.
 Add an -overwrite option to this operation automatically.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10650) Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in RegionServer

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919305#comment-13919305
 ] 

Hudson commented on HBASE-10650:


SUCCESS: Integrated in HBase-TRUNK #4975 (See 
[https://builds.apache.org/job/HBase-TRUNK/4975/])
HBASE-10650 Fix incorrect handling of IE that restores current thread's 
interrupt status within while/for loops in RegionServer (Feng Honghua) 
(nkeywal: rev 1573942)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java


 Fix incorrect handling of IE that restores current thread's interrupt status 
 within while/for loops in RegionServer
 ---

 Key: HBASE-10650
 URL: https://issues.apache.org/jira/browse/HBASE-10650
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver
Reporter: Feng Honghua
Assignee: Feng Honghua
 Fix For: 0.99.0

 Attachments: HBASE-10650-trunk_v1.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10652) Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in rpc

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919304#comment-13919304
 ] 

Hudson commented on HBASE-10652:


SUCCESS: Integrated in HBase-TRUNK #4975 (See 
[https://builds.apache.org/job/HBase-TRUNK/4975/])
HBASE-10652 Fix incorrect handling of IE that restores current thread's 
interrupt status within while/for loops in rpc (Feng Honghua) (nkeywal: rev 
1573937)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java


 Fix incorrect handling of IE that restores current thread's interrupt status 
 within while/for loops in rpc
 --

 Key: HBASE-10652
 URL: https://issues.apache.org/jira/browse/HBASE-10652
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Replication
Reporter: Feng Honghua
Assignee: Feng Honghua
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-10652-trunk_v1.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-9294) NPE in /rs-status during RS shutdown

2014-03-04 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated HBASE-9294:
---

Labels: patch  (was: )
Status: Patch Available  (was: Open)

Patch available

 NPE in /rs-status during RS shutdown
 

 Key: HBASE-9294
 URL: https://issues.apache.org/jira/browse/HBASE-9294
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.95.2
Reporter: Steve Loughran
Priority: Minor
  Labels: patch
 Attachments: HBASE-9294.1.patch


 While hitting reload to see when a kill-initiated RS shutdown would make the 
 Web UI go away, I got a stack trace from an NPE



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-9294) NPE in /rs-status during RS shutdown

2014-03-04 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated HBASE-9294:
---

Attachment: HBASE-9294.1.patch

Patch available.

 NPE in /rs-status during RS shutdown
 

 Key: HBASE-9294
 URL: https://issues.apache.org/jira/browse/HBASE-9294
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.95.2
Reporter: Steve Loughran
Priority: Minor
  Labels: patch
 Attachments: HBASE-9294.1.patch


 While hitting reload to see when a kill-initiated RS shutdown would make the 
 Web UI go away, I got a stack trace from an NPE



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10549) when there is a hole,LoadIncrementalHFiles will hung up in an infinite loop.

2014-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919306#comment-13919306
 ] 

Hadoop QA commented on HBASE-10549:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12632481/HBASE-10549-trunk.patch
  against trunk revision .
  ATTACHMENT ID: 12632481

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8876//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8876//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8876//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8876//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8876//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8876//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8876//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8876//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8876//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8876//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8876//console

This message is automatically generated.

 when there is a hole,LoadIncrementalHFiles will hung up in an infinite loop.
 

 Key: HBASE-10549
 URL: https://issues.apache.org/jira/browse/HBASE-10549
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.11
Reporter: yuanxinen
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-10549-trunk.patch


 First,I explan my test steps.
 1.importtsv
 2.split the region
 3.delete the region info from .META.(make a hole)
 4.LoadIncrementalHFiles (this step will hung up in an infinite loop)
 I check the log,there are two issues
 1.it create _tmp folder in an infinite loop.
 hdfs://hacluster/output3/i/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/test_table,136.bottom
 2.when slpliting the hfile,it put the first line data(1211) into two 
 files(top and bottom)
 Input 
 File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.top,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
 Input 
 File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.bottom,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
 and then I check the code.
 So I think before spliting the hfile,we should check the consistency of 
 startkey and endkey,if something wrong,we should throw the exception,and stop 
 LoadIncrementalHFiles.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9270) [0.94] FSTableDescriptors caching is racy

2014-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919312#comment-13919312
 ] 

Hadoop QA commented on HBASE-9270:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12632491/HBASE-9270.1.patch
  against trunk revision .
  ATTACHMENT ID: 12632491

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8878//console

This message is automatically generated.

 [0.94] FSTableDescriptors caching is racy
 -

 Key: HBASE-9270
 URL: https://issues.apache.org/jira/browse/HBASE-9270
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.11
Reporter: Andrew Purtell
Priority: Minor
 Attachments: HBASE-9270.1.patch


 An occasionally failing test in 0.92 branch that concurrently executes master 
 operations on a single table found this problem in FSTableDescriptors:
 {code}
 diff --git src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java 
 src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java
 index e882621..b0042cd 100644
 --- src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java
 +++ src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java
 @@ -221,8 +221,15 @@ public class FSTableDescriptors implements 
 TableDescriptors {
  if 
 (HConstants.HBASE_NON_USER_TABLE_DIRS.contains(htd.getNameAsString())) {
throw new NotImplementedException();
  }
 -if (!this.fsreadonly) updateHTableDescriptor(this.fs, this.rootdir, htd);
 -long modtime = getTableInfoModtime(this.fs, this.rootdir, 
 htd.getNameAsString());
 +if (fsreadonly) {
 +  // Cannot cache here.
 +  // We can't know if a modtime from the most recent file found in a
 +  // directory listing at some arbitrary point in time still corresponds
 +  // to the latest, nor that our htd is the latest.
 +  return;
 +}
 +// Cache with the modtime of the descriptor we wrote
 +long modtime = updateHTableDescriptor(this.fs, this.rootdir, 
 htd).getModificationTime();
  this.cache.put(htd.getNameAsString(), new 
 TableDescriptorModtime(modtime, htd));
}
 {code}
 After HBASE-7305 master operations are serialized by a write lock on the 
 table.
 However, 0.94 has code with the same issue:
 {code}
   @Override
   public void add(HTableDescriptor htd) throws IOException {
 if (Bytes.equals(HConstants.ROOT_TABLE_NAME, htd.getName())) {
   throw new NotImplementedException();
 }
 if (Bytes.equals(HConstants.META_TABLE_NAME, htd.getName())) {
   throw new NotImplementedException();
 }
 if (HConstants.HBASE_NON_USER_TABLE_DIRS.contains(htd.getNameAsString())) 
 {
   throw new NotImplementedException();
 }
 if (!this.fsreadonly) updateHTableDescriptor(this.fs, this.rootdir, htd);
 String tableName = htd.getNameAsString();
 long modtime = getTableInfoModtime(this.fs, this.rootdir, tableName);
 long dirmodtime = getTableDirModtime(this.fs, this.rootdir, tableName);
 this.cache.put(tableName, new TableDescriptorModtime(modtime, dirmodtime, 
 htd));
   }
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10531) Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo

2014-03-04 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919313#comment-13919313
 ] 

ramkrishna.s.vasudevan commented on HBASE-10531:


bq.otherwise we'll add yet another copy to an already expensive part of the 
scanning.
I have a way to work around this.  Now as we are creating a cell here for 
comparision, I will create a new KV here and that will not do any copy.
{code}
 public static class DerivedKeyValue extends KeyValue {

private int length = 0;
private int offset = 0;
private byte[] b;

public DerivedKeyValue(byte[] b, int offset, int length) {
  super(b,offset,length);
  this.b = b;
  setKeyOffset(offset);
  setKeyLength(length);
  this.length = length;
  this.offset = offset;
}

public void setKeyLength(int kLength) {
  this.length = kLength;
}

public void setKeyOffset(int kOffset) {
  this.offset = kOffset;
}

@Override
public int getKeyOffset() {
return this.offset;
}

@Override
public byte[] getRowArray() {
  // TODO Auto-generated method stub
  return b;
}

@Override
public int getRowOffset() {
  // TODO Auto-generated method stub
  return getKeyOffset() + Bytes.SIZEOF_SHORT;
}

@Override
public byte[] getFamilyArray() {
  // TODO Auto-generated method stub
  return b;
}

@Override
public byte getFamilyLength() {
  // TODO Auto-generated method stub
  return this.b[getFamilyOffset() - 1];
}

@Override
public int getFamilyOffset() {
  // TODO Auto-generated method stub
  return this.offset  + Bytes.SIZEOF_SHORT + getRowLength() + 
Bytes.SIZEOF_BYTE;
}

@Override
public byte[] getQualifierArray() {
  // TODO Auto-generated method stub
  return b;
}

@Override
public int getQualifierLength() {
  // TODO Auto-generated method stub
  return getQualifierLength(getRowLength(),getFamilyLength());
}

@Override
public int getQualifierOffset() {
  // TODO Auto-generated method stub
  return super.getQualifierOffset();
}
@Override
public int getKeyLength() {
  // TODO Auto-generated method stub
  return length;
}
@Override
public short getRowLength() {
  return Bytes.toShort(this.b, getKeyOffset());
}

private int getQualifierLength(int rlength, int flength) {
  return getKeyLength() - (int) getKeyDataStructureSize(rlength, flength, 
0);
}
}
{code}
Now here if you see the only difference between a normal Kv and the one craeted 
by KeyValue.createKeyValueFromKeyValue, we actually don't need the first 8 
bytes(ROW_OFFSET).  so by avoiding those bytes if we are able to implement our 
own getKeyLength, getRowOffset, etc we will be able to a proper comparison. Now 
we can compare the individual rows, families, qualifiers individually.  What 
you think?  so we avoid byte copy but we will create a new object.  But I think 
that is going to be cheaper.


 Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo
 

 Key: HBASE-10531
 URL: https://issues.apache.org/jira/browse/HBASE-10531
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0

 Attachments: HBASE-10531.patch, HBASE-10531_1.patch


 Currently the byte[] key passed to HFileScanner.seekTo and 
 HFileScanner.reseekTo, is a combination of row, cf, qual, type and ts.  And 
 the caller forms this by using kv.getBuffer, which is actually deprecated.  
 So see how this can be achieved considering kv.getBuffer is removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HBASE-10531) Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo

2014-03-04 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919313#comment-13919313
 ] 

ramkrishna.s.vasudevan edited comment on HBASE-10531 at 3/4/14 11:54 AM:
-

bq.otherwise we'll add yet another copy to an already expensive part of the 
scanning.
I have a way to work around this.  Now as we are creating a cell here for 
comparision, I will create a new KV here and that will not do any copy.
{code}
 public static class DerivedKeyValue extends KeyValue {

private int length = 0;
private int offset = 0;
private byte[] b;

public DerivedKeyValue(byte[] b, int offset, int length) {
  super(b,offset,length);
  this.b = b;
  setKeyOffset(offset);
  setKeyLength(length);
  this.length = length;
  this.offset = offset;
}

public void setKeyLength(int kLength) {
  this.length = kLength;
}

public void setKeyOffset(int kOffset) {
  this.offset = kOffset;
}

@Override
public int getKeyOffset() {
return this.offset;
}

@Override
public byte[] getRowArray() {
  // TODO Auto-generated method stub
  return b;
}

@Override
public int getRowOffset() {
  // TODO Auto-generated method stub
  return getKeyOffset() + Bytes.SIZEOF_SHORT;
}

@Override
public byte[] getFamilyArray() {
  // TODO Auto-generated method stub
  return b;
}

@Override
public byte getFamilyLength() {
  // TODO Auto-generated method stub
  return this.b[getFamilyOffset() - 1];
}

@Override
public int getFamilyOffset() {
  // TODO Auto-generated method stub
  return this.offset  + Bytes.SIZEOF_SHORT + getRowLength() + 
Bytes.SIZEOF_BYTE;
}

@Override
public byte[] getQualifierArray() {
  // TODO Auto-generated method stub
  return b;
}

@Override
public int getQualifierLength() {
  // TODO Auto-generated method stub
  return getQualifierLength(getRowLength(),getFamilyLength());
}

@Override
public int getQualifierOffset() {
  // TODO Auto-generated method stub
  return super.getQualifierOffset();
}
@Override
public int getKeyLength() {
  // TODO Auto-generated method stub
  return length;
}
@Override
public short getRowLength() {
  return Bytes.toShort(this.b, getKeyOffset());
}

private int getQualifierLength(int rlength, int flength) {
  return getKeyLength() - (int) getKeyDataStructureSize(rlength, flength, 
0);
}
}
{code}
Now here if you see the only difference between a normal Kv and the one craeted 
by KeyValue.createKeyValueFromKeyValue, we actually don't need the first 8 
bytes(ROW_OFFSET).  so by avoiding those bytes if we are able to implement our 
own getKeyLength, getRowOffset, etc we will be able to a proper comparison. Now 
we can compare the individual rows, families, qualifiers individually.  What 
you think?  so we avoid byte copy but we will create a new object.  But I think 
that is going to be cheaper.
So we can create a cell like 
{code}
Cell r = new KeyValue.DerivedKeyValue(arr[mid], 0, arr[mid].length);
{code}



was (Author: ram_krish):
bq.otherwise we'll add yet another copy to an already expensive part of the 
scanning.
I have a way to work around this.  Now as we are creating a cell here for 
comparision, I will create a new KV here and that will not do any copy.
{code}
 public static class DerivedKeyValue extends KeyValue {

private int length = 0;
private int offset = 0;
private byte[] b;

public DerivedKeyValue(byte[] b, int offset, int length) {
  super(b,offset,length);
  this.b = b;
  setKeyOffset(offset);
  setKeyLength(length);
  this.length = length;
  this.offset = offset;
}

public void setKeyLength(int kLength) {
  this.length = kLength;
}

public void setKeyOffset(int kOffset) {
  this.offset = kOffset;
}

@Override
public int getKeyOffset() {
return this.offset;
}

@Override
public byte[] getRowArray() {
  // TODO Auto-generated method stub
  return b;
}

@Override
public int getRowOffset() {
  // TODO Auto-generated method stub
  return getKeyOffset() + Bytes.SIZEOF_SHORT;
}

@Override
public byte[] getFamilyArray() {
  // TODO Auto-generated method stub
  return b;
}

@Override
public byte getFamilyLength() {
  // TODO Auto-generated method stub
  return this.b[getFamilyOffset() - 1];
}

@Override
public int getFamilyOffset() {
  // TODO Auto-generated method stub
  return this.offset  + Bytes.SIZEOF_SHORT + getRowLength() + 
Bytes.SIZEOF_BYTE;
}

@Override
public byte[] getQualifierArray() {
  // TODO 

[jira] [Commented] (HBASE-9294) NPE in /rs-status during RS shutdown

2014-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919315#comment-13919315
 ] 

Hadoop QA commented on HBASE-9294:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12632492/HBASE-9294.1.patch
  against trunk revision .
  ATTACHMENT ID: 12632492

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8879//console

This message is automatically generated.

 NPE in /rs-status during RS shutdown
 

 Key: HBASE-9294
 URL: https://issues.apache.org/jira/browse/HBASE-9294
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.95.2
Reporter: Steve Loughran
Priority: Minor
  Labels: patch
 Attachments: HBASE-9294.1.patch


 While hitting reload to see when a kill-initiated RS shutdown would make the 
 Web UI go away, I got a stack trace from an NPE



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9708) Improve Snapshot Name Error Message

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919319#comment-13919319
 ] 

Hudson commented on HBASE-9708:
---

FAILURE: Integrated in hbase-0.96 #325 (See 
[https://builds.apache.org/job/hbase-0.96/325/])
HBASE-9708 Improve Snapshot Name Error Message (Esteban Gutierrez) (mbertozzi: 
rev 1573950)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/ClientSnapshotDescriptionUtils.java
* 
/hbase/branches/0.96/hbase-common/src/main/java/org/apache/hadoop/hbase/TableName.java


 Improve Snapshot Name Error Message
 ---

 Key: HBASE-9708
 URL: https://issues.apache.org/jira/browse/HBASE-9708
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.94.2
Reporter: Jesse Anderson
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-9708.1.patch, HBASE-9708.1.patch


 The output for snapshots when you enter an invalid snapshot name talks about 
 User-space table names instead of Snapshot names. The error message 
 should say Snapshot names can only contain
 Here is an example of the output:
 {noformat}
 hbase(main):001:0 snapshot 'user', 'asdf asdf'
 ERROR: java.lang.IllegalArgumentException: Illegal character 32 at 4. 
 User-space table names can only contain 'word characters': i.e. 
 [a-zA-Z_0-9-.]: asdf asdf
 Here is some help for this command:
 Take a snapshot of specified table. Examples:
   hbase snapshot 'sourceTable', 'snapshotName'
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-9355) HBaseTestingUtility#cleanupDataTestDirOnTestFS() doesn't close the FileSystem

2014-03-04 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated HBASE-9355:
---

Affects Version/s: 0.92.2
   Status: Patch Available  (was: Open)

Attached patch.
Think the configuration parameter - fs.automatic.close set to true seems 
better, rather than the boilerplate.What do you think?

 HBaseTestingUtility#cleanupDataTestDirOnTestFS() doesn't close the FileSystem
 -

 Key: HBASE-9355
 URL: https://issues.apache.org/jira/browse/HBASE-9355
 Project: HBase
  Issue Type: Test
Affects Versions: 0.92.2
Reporter: Ted Yu
Priority: Minor
 Attachments: HBASE-9355.1.patch


 Here is related code:
 {code}
   public boolean cleanupDataTestDirOnTestFS() throws IOException {
 boolean ret = getTestFileSystem().delete(dataTestDirOnTestFS, true);
 if (ret)
   dataTestDirOnTestFS = null;
 return ret;
   }
 {code}
 The FileSystem returned by getTestFileSystem() is not closed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-9355) HBaseTestingUtility#cleanupDataTestDirOnTestFS() doesn't close the FileSystem

2014-03-04 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated HBASE-9355:
---

Attachment: HBASE-9355.1.patch

Attached patch.
Think the configuration parameter - fs.automatic.close set to true seems 
better, rather than the boilerplate.What do you think?

 HBaseTestingUtility#cleanupDataTestDirOnTestFS() doesn't close the FileSystem
 -

 Key: HBASE-9355
 URL: https://issues.apache.org/jira/browse/HBASE-9355
 Project: HBase
  Issue Type: Test
Affects Versions: 0.92.2
Reporter: Ted Yu
Priority: Minor
 Attachments: HBASE-9355.1.patch


 Here is related code:
 {code}
   public boolean cleanupDataTestDirOnTestFS() throws IOException {
 boolean ret = getTestFileSystem().delete(dataTestDirOnTestFS, true);
 if (ret)
   dataTestDirOnTestFS = null;
 return ret;
   }
 {code}
 The FileSystem returned by getTestFileSystem() is not closed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9708) Improve Snapshot Name Error Message

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919329#comment-13919329
 ] 

Hudson commented on HBASE-9708:
---

SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #185 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/185/])
HBASE-9708 Improve Snapshot Name Error Message (Esteban Gutierrez) (mbertozzi: 
rev 1573948)
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/ClientSnapshotDescriptionUtils.java
* 
/hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/TableName.java


 Improve Snapshot Name Error Message
 ---

 Key: HBASE-9708
 URL: https://issues.apache.org/jira/browse/HBASE-9708
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.94.2
Reporter: Jesse Anderson
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-9708.1.patch, HBASE-9708.1.patch


 The output for snapshots when you enter an invalid snapshot name talks about 
 User-space table names instead of Snapshot names. The error message 
 should say Snapshot names can only contain
 Here is an example of the output:
 {noformat}
 hbase(main):001:0 snapshot 'user', 'asdf asdf'
 ERROR: java.lang.IllegalArgumentException: Illegal character 32 at 4. 
 User-space table names can only contain 'word characters': i.e. 
 [a-zA-Z_0-9-.]: asdf asdf
 Here is some help for this command:
 Take a snapshot of specified table. Examples:
   hbase snapshot 'sourceTable', 'snapshotName'
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10662) RegionScanner should be closed and according lease should be cancelled in regionserver immediately if we find the related region has been re-opened during performing s

2014-03-04 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919331#comment-13919331
 ] 

Feng Honghua commented on HBASE-10662:
--

This bug occurs not only when regionserver processes scan request after region 
re-open, but also when regionserver processes scan request after the region is 
moved out(due to balance or user's move request) of the regionserver : 
NotServingRegionException is thrown and the RegionServerHolder is removed from 
scanners in regionserver, but when leaseExpired is executed due to lease 
expires, the related region scanner can't be closed due to the according 
RegionScannerHolder has already been removed from scanners without closing the 
related regionscanner...

 RegionScanner should be closed and according lease should be cancelled in 
 regionserver immediately if we find the related region has been re-opened 
 during performing scan request
 --

 Key: HBASE-10662
 URL: https://issues.apache.org/jira/browse/HBASE-10662
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Feng Honghua
Assignee: Feng Honghua
 Attachments: HBASE-10662-trunk_v1.patch


 During regionserver processes scan request from client, it fails the request 
 by throwing a wrapped NotServingRegionException to client if it finds the 
 region related to the passed-in scanner-id has been re-opened, and it also 
 removes the RegionScannerHolder from the scanners. In fact under this case, 
 the old and invalid RegionScanner related to the passed-in scanner-id should 
 be closed and the related lease should be cancelled at the mean time as well.
 Currently region's related scanners aren't closed when closing the region, a 
 region scanner is closed only when requested explicitly by client, or by 
 expiration of the related lease, in this sense the close of region scanners 
 is quite passive and lag.
 Sounds reasonable to cleanup all related scanners and cancel these scanners' 
 leases after closing a region?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-4920) We need a mascot, a totem

2014-03-04 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919333#comment-13919333
 ] 

Jonathan Hsieh commented on HBASE-4920:
---

I withdraw and other options I presented and +1 the orca.  

 We need a mascot, a totem
 -

 Key: HBASE-4920
 URL: https://issues.apache.org/jira/browse/HBASE-4920
 Project: HBase
  Issue Type: Task
Reporter: stack
 Attachments: HBase Orca Logo.jpg, Orca_479990801.jpg, Screen shot 
 2011-11-30 at 4.06.17 PM.png, apache hbase orca logo_Proof 3.pdf, apache 
 logo_Proof 8.pdf, krake.zip, more_orcas.png, more_orcas2.png, photo (2).JPG, 
 plus_orca.png


 We need a totem for our t-shirt that is yet to be printed.  O'Reilly owns the 
 Clyesdale.  We need something else.
 We could have a fluffy little duck that quacks 'hbase!' when you squeeze it 
 and we could order boxes of them from some off-shore sweatshop that 
 subcontracts to a contractor who employs child labor only.
 Or we could have an Orca (Big!, Fast!, Killer!, and in a poem that Marcy from 
 Salesforce showed me, that was a bit too spiritual for me to be seen quoting 
 here, it had the Orca as the 'Guardian of the Cosmic Memory': i.e. in 
 translation, bigdata).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HBASE-4920) We need a mascot, a totem

2014-03-04 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919333#comment-13919333
 ] 

Jonathan Hsieh edited comment on HBASE-4920 at 3/4/14 12:38 PM:


I withdraw any other options I presented and +1 the orca.  


was (Author: jmhsieh):
I withdraw and other options I presented and +1 the orca.  

 We need a mascot, a totem
 -

 Key: HBASE-4920
 URL: https://issues.apache.org/jira/browse/HBASE-4920
 Project: HBase
  Issue Type: Task
Reporter: stack
 Attachments: HBase Orca Logo.jpg, Orca_479990801.jpg, Screen shot 
 2011-11-30 at 4.06.17 PM.png, apache hbase orca logo_Proof 3.pdf, apache 
 logo_Proof 8.pdf, krake.zip, more_orcas.png, more_orcas2.png, photo (2).JPG, 
 plus_orca.png


 We need a totem for our t-shirt that is yet to be printed.  O'Reilly owns the 
 Clyesdale.  We need something else.
 We could have a fluffy little duck that quacks 'hbase!' when you squeeze it 
 and we could order boxes of them from some off-shore sweatshop that 
 subcontracts to a contractor who employs child labor only.
 Or we could have an Orca (Big!, Fast!, Killer!, and in a poem that Marcy from 
 Salesforce showed me, that was a bit too spiritual for me to be seen quoting 
 here, it had the Orca as the 'Guardian of the Cosmic Memory': i.e. in 
 translation, bigdata).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10662) RegionScanner should be closed and according lease should be cancelled in regionserver immediately if we find the related region has been re-opened during performing sca

2014-03-04 Thread Feng Honghua (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Honghua updated HBASE-10662:
-

Description: 
During regionserver processes scan request from client, it fails the request by 
throwing a wrapped NotServingRegionException to client if it finds the region 
related to the passed-in scanner-id has been re-opened, and it also removes the 
RegionScannerHolder from the scanners. In fact under this case, the old and 
invalid RegionScanner related to the passed-in scanner-id should be closed and 
the related lease should be cancelled at the mean time as well.

Currently region's related scanners aren't closed when closing the region, a 
region scanner is closed only when requested explicitly by client, or by 
expiration of the related lease, in this sense the close of region scanners is 
quite passive and lag.

When regionserver processes scan request from client and can't find online 
region corresponding to the passed-in scanner-id (due to being moved out) or 
find the region has been re-opened, it throws NotServingRegionException and 
removes the corresponding RegionScannerHolder from scanners without closing the 
related region scanner (nor cancelling the related lease), but when the lease 
expires, the related region scanner still doesn't be closed since it doesn't 
present in scanners now.

  was:
During regionserver processes scan request from client, it fails the request by 
throwing a wrapped NotServingRegionException to client if it finds the region 
related to the passed-in scanner-id has been re-opened, and it also removes the 
RegionScannerHolder from the scanners. In fact under this case, the old and 
invalid RegionScanner related to the passed-in scanner-id should be closed and 
the related lease should be cancelled at the mean time as well.

Currently region's related scanners aren't closed when closing the region, a 
region scanner is closed only when requested explicitly by client, or by 
expiration of the related lease, in this sense the close of region scanners is 
quite passive and lag.

Sounds reasonable to cleanup all related scanners and cancel these scanners' 
leases after closing a region?


 RegionScanner should be closed and according lease should be cancelled in 
 regionserver immediately if we find the related region has been re-opened 
 during performing scan request
 --

 Key: HBASE-10662
 URL: https://issues.apache.org/jira/browse/HBASE-10662
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Feng Honghua
Assignee: Feng Honghua
 Attachments: HBASE-10662-trunk_v1.patch


 During regionserver processes scan request from client, it fails the request 
 by throwing a wrapped NotServingRegionException to client if it finds the 
 region related to the passed-in scanner-id has been re-opened, and it also 
 removes the RegionScannerHolder from the scanners. In fact under this case, 
 the old and invalid RegionScanner related to the passed-in scanner-id should 
 be closed and the related lease should be cancelled at the mean time as well.
 Currently region's related scanners aren't closed when closing the region, a 
 region scanner is closed only when requested explicitly by client, or by 
 expiration of the related lease, in this sense the close of region scanners 
 is quite passive and lag.
 When regionserver processes scan request from client and can't find online 
 region corresponding to the passed-in scanner-id (due to being moved out) or 
 find the region has been re-opened, it throws NotServingRegionException and 
 removes the corresponding RegionScannerHolder from scanners without closing 
 the related region scanner (nor cancelling the related lease), but when the 
 lease expires, the related region scanner still doesn't be closed since it 
 doesn't present in scanners now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9708) Improve Snapshot Name Error Message

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919344#comment-13919344
 ] 

Hudson commented on HBASE-9708:
---

FAILURE: Integrated in hbase-0.96-hadoop2 #224 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/224/])
HBASE-9708 Improve Snapshot Name Error Message (Esteban Gutierrez) (mbertozzi: 
rev 1573950)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/ClientSnapshotDescriptionUtils.java
* 
/hbase/branches/0.96/hbase-common/src/main/java/org/apache/hadoop/hbase/TableName.java


 Improve Snapshot Name Error Message
 ---

 Key: HBASE-9708
 URL: https://issues.apache.org/jira/browse/HBASE-9708
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.94.2
Reporter: Jesse Anderson
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-9708.1.patch, HBASE-9708.1.patch


 The output for snapshots when you enter an invalid snapshot name talks about 
 User-space table names instead of Snapshot names. The error message 
 should say Snapshot names can only contain
 Here is an example of the output:
 {noformat}
 hbase(main):001:0 snapshot 'user', 'asdf asdf'
 ERROR: java.lang.IllegalArgumentException: Illegal character 32 at 4. 
 User-space table names can only contain 'word characters': i.e. 
 [a-zA-Z_0-9-.]: asdf asdf
 Here is some help for this command:
 Take a snapshot of specified table. Examples:
   hbase snapshot 'sourceTable', 'snapshotName'
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10622) Improve log and Exceptions in Export Snapshot

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919345#comment-13919345
 ] 

Hudson commented on HBASE-10622:


FAILURE: Integrated in hbase-0.96-hadoop2 #224 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/224/])
HBASE-10622 Improve log and Exceptions in Export Snapshot (mbertozzi: rev 
1574033)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java


 Improve log and Exceptions in Export Snapshot 
 --

 Key: HBASE-10622
 URL: https://issues.apache.org/jira/browse/HBASE-10622
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-10622-v0.patch, HBASE-10622-v1.patch, 
 HBASE-10622-v2.patch, HBASE-10622-v3.patch, HBASE-10622-v4.patch


 from the logs of export snapshot is not really clear what's going on,
 adding some extra information useful to debug, and in some places the real 
 exception can be thrown



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10653) Incorrect table status in HBase shell Describe

2014-03-04 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919346#comment-13919346
 ] 

Jonathan Hsieh commented on HBASE-10653:


Ive been annoyed by the formatter for columnar data in the shell for a while 
-- we need to add a mode where it is not reformatted.

 Incorrect table status in HBase shell Describe
 --

 Key: HBASE-10653
 URL: https://issues.apache.org/jira/browse/HBASE-10653
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Biju Nair
  Labels: HbaseShell, describe

 Describe output of table which is disabled shows as enabled.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10646) Enable security features by default for 1.0

2014-03-04 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919347#comment-13919347
 ] 

Jonathan Hsieh commented on HBASE-10646:


bq. The security features can mostly be enabled independently, although most 
features depend on secure authentication and secure RPC.

My understanding is that secure rpc is a separate implementation from the 
normal rpc today.  Does merging the secure rpc into the normal rpc make sense 
-- a negotiation at connection time and a runtime variable that says requires 
or doesn't require secure rpc?

bq. For this JIRA, it could be sufficient to enable most of the security 
features in the default configuration, excepting those which have, due to their 
nature, a performance consequence.

Can we just have a single security == true or security  == false config 
property?  For snapshots we added so that users only had to set that -- all the 
various plugins required for it to work got added when snapshots.enabled was 
set to true.

 Enable security features by default for 1.0
 ---

 Key: HBASE-10646
 URL: https://issues.apache.org/jira/browse/HBASE-10646
 Project: HBase
  Issue Type: Task
Affects Versions: 0.99.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell

 As discussed in the last PMC meeting, we should enable security features by 
 default in 1.0.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10567) Add overwrite manifest option to ExportSnapshot

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919352#comment-13919352
 ] 

Hudson commented on HBASE-10567:


FAILURE: Integrated in HBase-0.94 #1307 (See 
[https://builds.apache.org/job/HBase-0.94/1307/])
HBASE-10567 Add overwrite manifest option to ExportSnapshot (mbertozzi: rev 
1574017)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java


 Add overwrite manifest option to ExportSnapshot
 ---

 Key: HBASE-10567
 URL: https://issues.apache.org/jira/browse/HBASE-10567
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.98.0, 0.94.16, 0.99.0, 0.96.1.1
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: HBASE-10567-v0.patch, HBASE-10567-v1.patch


 If you want to export a snapshot twice (e.g. in case you accidentally removed 
 a file and now your snapshot is corrupted) you have to manually remove the 
 .hbase-snapshot/SNAPSHOT_NAME directory and then run the ExportSnapshot tool.
 Add an -overwrite option to this operation automatically.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10537) Let the ExportSnapshot mapper fail and retry on error

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919354#comment-13919354
 ] 

Hudson commented on HBASE-10537:


FAILURE: Integrated in HBase-0.94 #1307 (See 
[https://builds.apache.org/job/HBase-0.94/1307/])
HBASE-10537 Let the ExportSnapshot mapper fail and retry on error (mbertozzi: 
rev 1574016)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java


 Let the ExportSnapshot mapper fail and retry on error
 -

 Key: HBASE-10537
 URL: https://issues.apache.org/jira/browse/HBASE-10537
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: HBASE-10537-v1.patch, HBASE-10537-v2.patch


 Instead of completing the job, and force the user to re-run the export if 
 something failed, let the Mapper fail and retry automatically based on the 
 job.getMaxMapAttempts()



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9708) Improve Snapshot Name Error Message

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919353#comment-13919353
 ] 

Hudson commented on HBASE-9708:
---

FAILURE: Integrated in HBase-0.94 #1307 (See 
[https://builds.apache.org/job/HBase-0.94/1307/])
HBASE-9708 Improve Snapshot Name Error Message (Esteban Gutierrez) (mbertozzi: 
rev 1573962)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotDescriptionUtils.java


 Improve Snapshot Name Error Message
 ---

 Key: HBASE-9708
 URL: https://issues.apache.org/jira/browse/HBASE-9708
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.94.2
Reporter: Jesse Anderson
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-9708.1.patch, HBASE-9708.1.patch


 The output for snapshots when you enter an invalid snapshot name talks about 
 User-space table names instead of Snapshot names. The error message 
 should say Snapshot names can only contain
 Here is an example of the output:
 {noformat}
 hbase(main):001:0 snapshot 'user', 'asdf asdf'
 ERROR: java.lang.IllegalArgumentException: Illegal character 32 at 4. 
 User-space table names can only contain 'word characters': i.e. 
 [a-zA-Z_0-9-.]: asdf asdf
 Here is some help for this command:
 Take a snapshot of specified table. Examples:
   hbase snapshot 'sourceTable', 'snapshotName'
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10622) Improve log and Exceptions in Export Snapshot

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919366#comment-13919366
 ] 

Hudson commented on HBASE-10622:


FAILURE: Integrated in HBase-0.94-security #429 (See 
[https://builds.apache.org/job/HBase-0.94-security/429/])
HBASE-10622 Improve log and Exceptions in Export Snapshot (mbertozzi: rev 
1574034)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java


 Improve log and Exceptions in Export Snapshot 
 --

 Key: HBASE-10622
 URL: https://issues.apache.org/jira/browse/HBASE-10622
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-10622-v0.patch, HBASE-10622-v1.patch, 
 HBASE-10622-v2.patch, HBASE-10622-v3.patch, HBASE-10622-v4.patch


 from the logs of export snapshot is not really clear what's going on,
 adding some extra information useful to debug, and in some places the real 
 exception can be thrown



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10567) Add overwrite manifest option to ExportSnapshot

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919394#comment-13919394
 ] 

Hudson commented on HBASE-10567:


FAILURE: Integrated in HBase-0.94-JDK7 #71 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/71/])
HBASE-10567 Add overwrite manifest option to ExportSnapshot (mbertozzi: rev 
1574017)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java


 Add overwrite manifest option to ExportSnapshot
 ---

 Key: HBASE-10567
 URL: https://issues.apache.org/jira/browse/HBASE-10567
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.98.0, 0.94.16, 0.99.0, 0.96.1.1
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: HBASE-10567-v0.patch, HBASE-10567-v1.patch


 If you want to export a snapshot twice (e.g. in case you accidentally removed 
 a file and now your snapshot is corrupted) you have to manually remove the 
 .hbase-snapshot/SNAPSHOT_NAME directory and then run the ExportSnapshot tool.
 Add an -overwrite option to this operation automatically.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9708) Improve Snapshot Name Error Message

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919395#comment-13919395
 ] 

Hudson commented on HBASE-9708:
---

FAILURE: Integrated in HBase-0.94-JDK7 #71 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/71/])
HBASE-9708 Improve Snapshot Name Error Message (Esteban Gutierrez) (mbertozzi: 
rev 1573962)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotDescriptionUtils.java


 Improve Snapshot Name Error Message
 ---

 Key: HBASE-9708
 URL: https://issues.apache.org/jira/browse/HBASE-9708
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.94.2
Reporter: Jesse Anderson
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-9708.1.patch, HBASE-9708.1.patch


 The output for snapshots when you enter an invalid snapshot name talks about 
 User-space table names instead of Snapshot names. The error message 
 should say Snapshot names can only contain
 Here is an example of the output:
 {noformat}
 hbase(main):001:0 snapshot 'user', 'asdf asdf'
 ERROR: java.lang.IllegalArgumentException: Illegal character 32 at 4. 
 User-space table names can only contain 'word characters': i.e. 
 [a-zA-Z_0-9-.]: asdf asdf
 Here is some help for this command:
 Take a snapshot of specified table. Examples:
   hbase snapshot 'sourceTable', 'snapshotName'
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10537) Let the ExportSnapshot mapper fail and retry on error

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919396#comment-13919396
 ] 

Hudson commented on HBASE-10537:


FAILURE: Integrated in HBase-0.94-JDK7 #71 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/71/])
HBASE-10537 Let the ExportSnapshot mapper fail and retry on error (mbertozzi: 
rev 1574016)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java


 Let the ExportSnapshot mapper fail and retry on error
 -

 Key: HBASE-10537
 URL: https://issues.apache.org/jira/browse/HBASE-10537
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: HBASE-10537-v1.patch, HBASE-10537-v2.patch


 Instead of completing the job, and force the user to re-run the export if 
 something failed, let the Mapper fail and retry automatically based on the 
 job.getMaxMapAttempts()



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10622) Improve log and Exceptions in Export Snapshot

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919399#comment-13919399
 ] 

Hudson commented on HBASE-10622:


FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #39 (See 
[https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/39/])
HBASE-10622 Improve log and Exceptions in Export Snapshot (mbertozzi: rev 
1574034)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java


 Improve log and Exceptions in Export Snapshot 
 --

 Key: HBASE-10622
 URL: https://issues.apache.org/jira/browse/HBASE-10622
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-10622-v0.patch, HBASE-10622-v1.patch, 
 HBASE-10622-v2.patch, HBASE-10622-v3.patch, HBASE-10622-v4.patch


 from the logs of export snapshot is not really clear what's going on,
 adding some extra information useful to debug, and in some places the real 
 exception can be thrown



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10664) TestImportExport runs too long

2014-03-04 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-10664:
--

 Summary: TestImportExport runs too long
 Key: HBASE-10664
 URL: https://issues.apache.org/jira/browse/HBASE-10664
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Andrew Purtell
 Fix For: 0.98.1, 0.99.0


Debugging with -Dsurefire.firstPartForkMode=always 
-Dsurefire.secondPartForkMode=always looking for a hanging test. 

388 seconds.

{noformat}
Forking command line: /bin/sh -c cd /data/src/hbase/hbase-server  
/usr/lib/jvm/java-1.7.0.45-oracle-amd64/jre/bin/java -enableassertions 
-Xmx1900m -XX:MaxPermSize=100m -Djava.security.egd=file:/dev/./urandom 
-Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -jar 
/data/src/hbase/hbase-server/target/surefire/surefirebooter7637958208277391169.jar
 /data/src/hbase/hbase-server/target/surefire/surefire6877889026110956843tmp 
/data/src/hbase/hbase-server/target/surefire/surefire_1907837210788480451831tmp
Running org.apache.hadoop.hbase.mapreduce.TestImportExport
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 388.246 sec
{noformat}

Slim down or break it up.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9355) HBaseTestingUtility#cleanupDataTestDirOnTestFS() doesn't close the FileSystem

2014-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919410#comment-13919410
 ] 

Hadoop QA commented on HBASE-9355:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12632494/HBASE-9355.1.patch
  against trunk revision .
  ATTACHMENT ID: 12632494

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.zookeeper.lock.TestZKInterProcessReadWriteLock

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8880//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8880//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8880//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8880//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8880//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8880//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8880//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8880//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8880//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8880//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8880//console

This message is automatically generated.

 HBaseTestingUtility#cleanupDataTestDirOnTestFS() doesn't close the FileSystem
 -

 Key: HBASE-9355
 URL: https://issues.apache.org/jira/browse/HBASE-9355
 Project: HBase
  Issue Type: Test
Affects Versions: 0.92.2
Reporter: Ted Yu
Priority: Minor
 Attachments: HBASE-9355.1.patch


 Here is related code:
 {code}
   public boolean cleanupDataTestDirOnTestFS() throws IOException {
 boolean ret = getTestFileSystem().delete(dataTestDirOnTestFS, true);
 if (ret)
   dataTestDirOnTestFS = null;
 return ret;
   }
 {code}
 The FileSystem returned by getTestFileSystem() is not closed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9999) Add support for small reverse scan

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919414#comment-13919414
 ] 

Hudson commented on HBASE-:
---

FAILURE: Integrated in HBase-TRUNK #4976 (See 
[https://builds.apache.org/job/HBase-TRUNK/4976/])
HBASE- Add support for small reverse scan - with new files (nkeywal: rev 
1573951)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientSmallReversedScanner.java
HBASE- Add support for small reverse scan (nkeywal: rev 1573949)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientSmallScanner.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ReversedClientScanner.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ReversedScannerCallable.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


 Add support for small reverse scan
 --

 Key: HBASE-
 URL: https://issues.apache.org/jira/browse/HBASE-
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: .v1.patch, .v2.patch, .v3.patch


 HBASE-4811 adds feature of reverse scan. This JIRA adds the support for small 
 reverse scan.
 This is activated when both 'reversed' and 'small' attributes are true in 
 Scan Object



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10622) Improve log and Exceptions in Export Snapshot

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919416#comment-13919416
 ] 

Hudson commented on HBASE-10622:


FAILURE: Integrated in HBase-TRUNK #4976 (See 
[https://builds.apache.org/job/HBase-TRUNK/4976/])
HBASE-10622 Improve log and Exceptions in Export Snapshot (mbertozzi: rev 
1574031)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java


 Improve log and Exceptions in Export Snapshot 
 --

 Key: HBASE-10622
 URL: https://issues.apache.org/jira/browse/HBASE-10622
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-10622-v0.patch, HBASE-10622-v1.patch, 
 HBASE-10622-v2.patch, HBASE-10622-v3.patch, HBASE-10622-v4.patch


 from the logs of export snapshot is not really clear what's going on,
 adding some extra information useful to debug, and in some places the real 
 exception can be thrown



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9708) Improve Snapshot Name Error Message

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919415#comment-13919415
 ] 

Hudson commented on HBASE-9708:
---

FAILURE: Integrated in HBase-TRUNK #4976 (See 
[https://builds.apache.org/job/HBase-TRUNK/4976/])
HBASE-9708 Improve Snapshot Name Error Message (Esteban Gutierrez) (mbertozzi: 
rev 1573947)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/ClientSnapshotDescriptionUtils.java
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/TableName.java


 Improve Snapshot Name Error Message
 ---

 Key: HBASE-9708
 URL: https://issues.apache.org/jira/browse/HBASE-9708
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.94.2
Reporter: Jesse Anderson
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-9708.1.patch, HBASE-9708.1.patch


 The output for snapshots when you enter an invalid snapshot name talks about 
 User-space table names instead of Snapshot names. The error message 
 should say Snapshot names can only contain
 Here is an example of the output:
 {noformat}
 hbase(main):001:0 snapshot 'user', 'asdf asdf'
 ERROR: java.lang.IllegalArgumentException: Illegal character 32 at 4. 
 User-space table names can only contain 'word characters': i.e. 
 [a-zA-Z_0-9-.]: asdf asdf
 Here is some help for this command:
 Take a snapshot of specified table. Examples:
   hbase snapshot 'sourceTable', 'snapshotName'
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10665) TestCompaction runs too long

2014-03-04 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-10665:
--

 Summary: TestCompaction runs too long
 Key: HBASE-10665
 URL: https://issues.apache.org/jira/browse/HBASE-10665
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Andrew Purtell
 Fix For: 0.98.1, 0.99.0


584 seconds

{noformat}
Forking command line: /bin/sh -c cd /data/src/hbase/hbase-server  
/usr/lib/jvm/java-1.7.0.45-oracle-amd64/jre/bin/java -enableassertions 
-Xmx1900m -XX:MaxPermSize=100m -Djava.security.egd=file:/dev/./urandom 
-Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -jar 
/data/src/hbase/hbase-server/target/surefire/surefirebooter5980733570856201818.jar
 /data/src/hbase/hbase-server/target/surefire/surefire4520171250819563114tmp 
/data/src/hbase/hbase-server/target/surefire/surefire_2794381603824180144412tmp
Running org.apache.hadoop.hbase.regionserver.TestCompaction
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 584.609 sec
{noformat}

Slim down or split up.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10665) TestCompaction and TestCompactionWithCoprocessor run too long

2014-03-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10665:
---

Summary: TestCompaction and TestCompactionWithCoprocessor run too long  
(was: TestCompaction runs too long)

 TestCompaction and TestCompactionWithCoprocessor run too long
 -

 Key: HBASE-10665
 URL: https://issues.apache.org/jira/browse/HBASE-10665
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Andrew Purtell
 Fix For: 0.98.1, 0.99.0


 584 seconds
 {noformat}
 Forking command line: /bin/sh -c cd /data/src/hbase/hbase-server  
 /usr/lib/jvm/java-1.7.0.45-oracle-amd64/jre/bin/java -enableassertions 
 -Xmx1900m -XX:MaxPermSize=100m -Djava.security.egd=file:/dev/./urandom 
 -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -jar 
 /data/src/hbase/hbase-server/target/surefire/surefirebooter5980733570856201818.jar
  /data/src/hbase/hbase-server/target/surefire/surefire4520171250819563114tmp 
 /data/src/hbase/hbase-server/target/surefire/surefire_2794381603824180144412tmp
 Running org.apache.hadoop.hbase.regionserver.TestCompaction
 Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 584.609 sec
 {noformat}
 Slim down or split up.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10665) TestCompaction and TestCompactionWithCoprocessor run too long

2014-03-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10665:
---

Description: 
584 seconds each

TestCompaction:

{noformat}
Forking command line: /bin/sh -c cd /data/src/hbase/hbase-server  
/usr/lib/jvm/java-1.7.0.45-oracle-amd64/jre/bin/java -enableassertions 
-Xmx1900m -XX:MaxPermSize=100m -Djava.security.egd=file:/dev/./urandom 
-Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -jar 
/data/src/hbase/hbase-server/target/surefire/surefirebooter5980733570856201818.jar
 /data/src/hbase/hbase-server/target/surefire/surefire4520171250819563114tmp 
/data/src/hbase/hbase-server/target/surefire/surefire_2794381603824180144412tmp
Running org.apache.hadoop.hbase.regionserver.TestCompaction
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 584.609 sec
{noformat}

TestCompactionWithCoprocessor:

{noformat}
Forking command line: /bin/sh -c cd /data/src/hbase/hbase-server  
/usr/lib/jvm/java-1.7.0.45-oracle-amd64/jre/bin/java -enableassertions 
-Xmx1900m -XX:MaxPermSize=100m -Djava.security.egd=file:/dev/./urandom 
-Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -jar 
/data/src/hbase/hbase-server/target/surefire/surefirebooter7194368346045889527.jar
 /data/src/hbase/hbase-server/target/surefire/surefire9025480282422315585tmp 
/data/src/hbase/hbase-server/target/surefire/surefire_2815590620956840351617tmp
Running org.apache.hadoop.hbase.regionserver.TestCompactionWithCoprocessor
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 584.399 sec
{noformat}

Slim down or split up.

  was:
584 seconds

{noformat}
Forking command line: /bin/sh -c cd /data/src/hbase/hbase-server  
/usr/lib/jvm/java-1.7.0.45-oracle-amd64/jre/bin/java -enableassertions 
-Xmx1900m -XX:MaxPermSize=100m -Djava.security.egd=file:/dev/./urandom 
-Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -jar 
/data/src/hbase/hbase-server/target/surefire/surefirebooter5980733570856201818.jar
 /data/src/hbase/hbase-server/target/surefire/surefire4520171250819563114tmp 
/data/src/hbase/hbase-server/target/surefire/surefire_2794381603824180144412tmp
Running org.apache.hadoop.hbase.regionserver.TestCompaction
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 584.609 sec
{noformat}

Slim down or split up.


 TestCompaction and TestCompactionWithCoprocessor run too long
 -

 Key: HBASE-10665
 URL: https://issues.apache.org/jira/browse/HBASE-10665
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Andrew Purtell
 Fix For: 0.98.1, 0.99.0


 584 seconds each
 TestCompaction:
 {noformat}
 Forking command line: /bin/sh -c cd /data/src/hbase/hbase-server  
 /usr/lib/jvm/java-1.7.0.45-oracle-amd64/jre/bin/java -enableassertions 
 -Xmx1900m -XX:MaxPermSize=100m -Djava.security.egd=file:/dev/./urandom 
 -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -jar 
 /data/src/hbase/hbase-server/target/surefire/surefirebooter5980733570856201818.jar
  /data/src/hbase/hbase-server/target/surefire/surefire4520171250819563114tmp 
 /data/src/hbase/hbase-server/target/surefire/surefire_2794381603824180144412tmp
 Running org.apache.hadoop.hbase.regionserver.TestCompaction
 Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 584.609 sec
 {noformat}
 TestCompactionWithCoprocessor:
 {noformat}
 Forking command line: /bin/sh -c cd /data/src/hbase/hbase-server  
 /usr/lib/jvm/java-1.7.0.45-oracle-amd64/jre/bin/java -enableassertions 
 -Xmx1900m -XX:MaxPermSize=100m -Djava.security.egd=file:/dev/./urandom 
 -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -jar 
 /data/src/hbase/hbase-server/target/surefire/surefirebooter7194368346045889527.jar
  /data/src/hbase/hbase-server/target/surefire/surefire9025480282422315585tmp 
 /data/src/hbase/hbase-server/target/surefire/surefire_2815590620956840351617tmp
 Running org.apache.hadoop.hbase.regionserver.TestCompactionWithCoprocessor
 Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 584.399 sec
 {noformat}
 Slim down or split up.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10637) rpcClient: Setup the iostreams when writing

2014-03-04 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10637:


Summary: rpcClient: Setup the iostreams when writing  (was: rpcClient: 
Setup the iostream when doing the write)

 rpcClient: Setup the iostreams when writing
 ---

 Key: HBASE-10637
 URL: https://issues.apache.org/jira/browse/HBASE-10637
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0, hbase-10070

 Attachments: 10637.v1.patch


 Since HBASE-10525, we can write in a different thread than the client. This 
 allows the client thread to be interrupted w/o any impact on the shared tcp 
 connection. We should setup the iostream on the second thread as well, i.e. 
 when we do the write, and not when we do the getConnection.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10637) rpcClient: Setup the iostreams when writing

2014-03-04 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919451#comment-13919451
 ] 

Nicolas Liochon commented on HBASE-10637:
-

Committed to trunk, thanks for the review, Stack  Devaraj.

 rpcClient: Setup the iostreams when writing
 ---

 Key: HBASE-10637
 URL: https://issues.apache.org/jira/browse/HBASE-10637
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0, hbase-10070

 Attachments: 10637.v1.patch


 Since HBASE-10525, we can write in a different thread than the client. This 
 allows the client thread to be interrupted w/o any impact on the shared tcp 
 connection. We should setup the iostream on the second thread as well, i.e. 
 when we do the write, and not when we do the getConnection.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-10637) rpcClient: Setup the iostreams when writing

2014-03-04 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon resolved HBASE-10637.
-

   Resolution: Fixed
Fix Version/s: (was: hbase-10070)
 Hadoop Flags: Reviewed

 rpcClient: Setup the iostreams when writing
 ---

 Key: HBASE-10637
 URL: https://issues.apache.org/jira/browse/HBASE-10637
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10637.v1.patch


 Since HBASE-10525, we can write in a different thread than the client. This 
 allows the client thread to be interrupted w/o any impact on the shared tcp 
 connection. We should setup the iostream on the second thread as well, i.e. 
 when we do the write, and not when we do the getConnection.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10665) TestCompaction and TestCompactionWithCoprocessor run too long

2014-03-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919465#comment-13919465
 ] 

Andrew Purtell commented on HBASE-10665:


I also want to sample the running time of this test stepping back in commit 
history.

 TestCompaction and TestCompactionWithCoprocessor run too long
 -

 Key: HBASE-10665
 URL: https://issues.apache.org/jira/browse/HBASE-10665
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Andrew Purtell
 Fix For: 0.98.1, 0.99.0


 584 seconds each
 TestCompaction:
 {noformat}
 Forking command line: /bin/sh -c cd /data/src/hbase/hbase-server  
 /usr/lib/jvm/java-1.7.0.45-oracle-amd64/jre/bin/java -enableassertions 
 -Xmx1900m -XX:MaxPermSize=100m -Djava.security.egd=file:/dev/./urandom 
 -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -jar 
 /data/src/hbase/hbase-server/target/surefire/surefirebooter5980733570856201818.jar
  /data/src/hbase/hbase-server/target/surefire/surefire4520171250819563114tmp 
 /data/src/hbase/hbase-server/target/surefire/surefire_2794381603824180144412tmp
 Running org.apache.hadoop.hbase.regionserver.TestCompaction
 Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 584.609 sec
 {noformat}
 TestCompactionWithCoprocessor:
 {noformat}
 Forking command line: /bin/sh -c cd /data/src/hbase/hbase-server  
 /usr/lib/jvm/java-1.7.0.45-oracle-amd64/jre/bin/java -enableassertions 
 -Xmx1900m -XX:MaxPermSize=100m -Djava.security.egd=file:/dev/./urandom 
 -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -jar 
 /data/src/hbase/hbase-server/target/surefire/surefirebooter7194368346045889527.jar
  /data/src/hbase/hbase-server/target/surefire/surefire9025480282422315585tmp 
 /data/src/hbase/hbase-server/target/surefire/surefire_2815590620956840351617tmp
 Running org.apache.hadoop.hbase.regionserver.TestCompactionWithCoprocessor
 Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 584.399 sec
 {noformat}
 Slim down or split up.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10622) Improve log and Exceptions in Export Snapshot

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919483#comment-13919483
 ] 

Hudson commented on HBASE-10622:


SUCCESS: Integrated in hbase-0.96 #326 (See 
[https://builds.apache.org/job/hbase-0.96/326/])
HBASE-10622 Improve log and Exceptions in Export Snapshot (mbertozzi: rev 
1574033)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java


 Improve log and Exceptions in Export Snapshot 
 --

 Key: HBASE-10622
 URL: https://issues.apache.org/jira/browse/HBASE-10622
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-10622-v0.patch, HBASE-10622-v1.patch, 
 HBASE-10622-v2.patch, HBASE-10622-v3.patch, HBASE-10622-v4.patch


 from the logs of export snapshot is not really clear what's going on,
 adding some extra information useful to debug, and in some places the real 
 exception can be thrown



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10666) TestMasterCoprocessorExceptionWithAbort hangs at shutdown

2014-03-04 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-10666:
--

 Summary: TestMasterCoprocessorExceptionWithAbort hangs at shutdown
 Key: HBASE-10666
 URL: https://issues.apache.org/jira/browse/HBASE-10666
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Andrew Purtell
 Fix For: 0.98.1, 0.99.0


Stacktrace from a run where TestMasterCoprocessorExceptionWithAbort is hung up 
in HBaseTestingUtility.shutdownMiniHBaseCluster at tear down time.

{noformat}
pool-1-thread-1 prio=10 tid=0x02446800 nid=0x3d31 waiting for monitor 
entry [0x7ffbdc412000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.closeMaster(HConnectionManager.java:2192)
- waiting to lock 0xda044e78 (a java.lang.Object)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.internalClose(HConnectionManager.java:2529)
at 
org.apache.hadoop.hbase.client.HConnectionManager.deleteConnection(HConnectionManager.java:490)
- locked 0xdc589170 (a 
org.apache.hadoop.hbase.client.HConnectionManager$1)
at 
org.apache.hadoop.hbase.client.HConnectionManager.deleteConnection(HConnectionManager.java:475)
- locked 0xdc589170 (a 
org.apache.hadoop.hbase.client.HConnectionManager$1)
at 
org.apache.hadoop.hbase.client.HConnectionManager.access$1900(HConnectionManager.java:199)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.close(HConnectionManager.java:2545)
at org.apache.hadoop.hbase.client.HBaseAdmin.close(HBaseAdmin.java:2381)
- locked 0xda036a60 (a 
org.apache.hadoop.hbase.HBaseTestingUtility$HBaseAdminForTests)
at 
org.apache.hadoop.hbase.HBaseTestingUtility$HBaseAdminForTests.close0(HBaseTestingUtility.java:2404)
- locked 0xda036a60 (a 
org.apache.hadoop.hbase.HBaseTestingUtility$HBaseAdminForTests)
at 
org.apache.hadoop.hbase.HBaseTestingUtility$HBaseAdminForTests.access$000(HBaseTestingUtility.java:2392)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniHBaseCluster(HBaseTestingUtility.java:961)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:944)
at 
org.apache.hadoop.hbase.coprocessor.TestMasterCoprocessorExceptionWithAbort.teardownAfterClass(TestMasterCoprocessorExceptionWithAbort.java:152)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HBASE-10666) TestMasterCoprocessorExceptionWithAbort hangs at shutdown

2014-03-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell reassigned HBASE-10666:
--

Assignee: Andrew Purtell

 TestMasterCoprocessorExceptionWithAbort hangs at shutdown
 -

 Key: HBASE-10666
 URL: https://issues.apache.org/jira/browse/HBASE-10666
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.98.1, 0.99.0


 Stacktrace from a run where TestMasterCoprocessorExceptionWithAbort is hung 
 up in HBaseTestingUtility.shutdownMiniHBaseCluster at tear down time.
 {noformat}
 pool-1-thread-1 prio=10 tid=0x02446800 nid=0x3d31 waiting for 
 monitor 
 entry [0x7ffbdc412000]
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.closeMaster(HConnectionManager.java:2192)
 - waiting to lock 0xda044e78 (a java.lang.Object)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.internalClose(HConnectionManager.java:2529)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager.deleteConnection(HConnectionManager.java:490)
 - locked 0xdc589170 (a 
 org.apache.hadoop.hbase.client.HConnectionManager$1)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager.deleteConnection(HConnectionManager.java:475)
 - locked 0xdc589170 (a 
 org.apache.hadoop.hbase.client.HConnectionManager$1)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager.access$1900(HConnectionManager.java:199)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.close(HConnectionManager.java:2545)
 at 
 org.apache.hadoop.hbase.client.HBaseAdmin.close(HBaseAdmin.java:2381)
 - locked 0xda036a60 (a 
 org.apache.hadoop.hbase.HBaseTestingUtility$HBaseAdminForTests)
 at 
 org.apache.hadoop.hbase.HBaseTestingUtility$HBaseAdminForTests.close0(HBaseTestingUtility.java:2404)
 - locked 0xda036a60 (a 
 org.apache.hadoop.hbase.HBaseTestingUtility$HBaseAdminForTests)
 at 
 org.apache.hadoop.hbase.HBaseTestingUtility$HBaseAdminForTests.access$000(HBaseTestingUtility.java:2392)
 at 
 org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniHBaseCluster(HBaseTestingUtility.java:961)
 at 
 org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:944)
 at 
 org.apache.hadoop.hbase.coprocessor.TestMasterCoprocessorExceptionWithAbort.teardownAfterClass(TestMasterCoprocessorExceptionWithAbort.java:152)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10624) Fix 2 new findbugs warnings

2014-03-04 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919491#comment-13919491
 ] 

Jean-Marc Spaggiari commented on HBASE-10624:
-

On 4 nodes cluster, with Hadoop 2.2.0. 80%tile. One client, non-mapred.

v2 = 24055 writes/seconds (Std Dev 1.8%)
v4 = 23899 writes/seconds (Std Dev 3.26%)

v2 is 0.65% faster than v4. Not that much of a difference.

 Fix 2 new findbugs warnings
 ---

 Key: HBASE-10624
 URL: https://issues.apache.org/jira/browse/HBASE-10624
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: 10624-v1.txt, 10624-v2.txt, 10624-v4.txt


 Inconsistent synchronization of 
 org.apache.hadoop.hbase.regionserver.TimeRangeTracker.maximumTimestamp; 
 locked 66% of time
 {code}
 In class org.apache.hadoop.hbase.regionserver.TimeRangeTracker
 Field org.apache.hadoop.hbase.regionserver.TimeRangeTracker.maximumTimestamp
 Synchronized 66% of the time
 {code}
 Inconsistent synchronization of 
 org.apache.hadoop.hbase.regionserver.TimeRangeTracker.minimumTimestamp; 
 locked 62% of time
 {code}
 In class org.apache.hadoop.hbase.regionserver.TimeRangeTracker
 Field org.apache.hadoop.hbase.regionserver.TimeRangeTracker.minimumTimestamp
 Synchronized 62% of the time
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10666) TestMasterCoprocessorExceptionWithAbort hangs at shutdown

2014-03-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919489#comment-13919489
 ] 

Andrew Purtell commented on HBASE-10666:


Might be easy enough to start a new master after triggering the abort of 
previous, so we can get a clean shutdown.

 TestMasterCoprocessorExceptionWithAbort hangs at shutdown
 -

 Key: HBASE-10666
 URL: https://issues.apache.org/jira/browse/HBASE-10666
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.98.1, 0.99.0


 Stacktrace from a run where TestMasterCoprocessorExceptionWithAbort is hung 
 up in HBaseTestingUtility.shutdownMiniHBaseCluster at tear down time.
 {noformat}
 pool-1-thread-1 prio=10 tid=0x02446800 nid=0x3d31 waiting for 
 monitor 
 entry [0x7ffbdc412000]
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.closeMaster(HConnectionManager.java:2192)
 - waiting to lock 0xda044e78 (a java.lang.Object)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.internalClose(HConnectionManager.java:2529)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager.deleteConnection(HConnectionManager.java:490)
 - locked 0xdc589170 (a 
 org.apache.hadoop.hbase.client.HConnectionManager$1)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager.deleteConnection(HConnectionManager.java:475)
 - locked 0xdc589170 (a 
 org.apache.hadoop.hbase.client.HConnectionManager$1)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager.access$1900(HConnectionManager.java:199)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.close(HConnectionManager.java:2545)
 at 
 org.apache.hadoop.hbase.client.HBaseAdmin.close(HBaseAdmin.java:2381)
 - locked 0xda036a60 (a 
 org.apache.hadoop.hbase.HBaseTestingUtility$HBaseAdminForTests)
 at 
 org.apache.hadoop.hbase.HBaseTestingUtility$HBaseAdminForTests.close0(HBaseTestingUtility.java:2404)
 - locked 0xda036a60 (a 
 org.apache.hadoop.hbase.HBaseTestingUtility$HBaseAdminForTests)
 at 
 org.apache.hadoop.hbase.HBaseTestingUtility$HBaseAdminForTests.access$000(HBaseTestingUtility.java:2392)
 at 
 org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniHBaseCluster(HBaseTestingUtility.java:961)
 at 
 org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:944)
 at 
 org.apache.hadoop.hbase.coprocessor.TestMasterCoprocessorExceptionWithAbort.teardownAfterClass(TestMasterCoprocessorExceptionWithAbort.java:152)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10667) TestEncodedSeekers runs too long

2014-03-04 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-10667:
--

 Summary: TestEncodedSeekers runs too long
 Key: HBASE-10667
 URL: https://issues.apache.org/jira/browse/HBASE-10667
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Andrew Purtell
 Fix For: 0.98.1, 0.99.0


214 seconds, borderline

{noformat}
Forking command line: /bin/sh -c cd /data/src/hbase/hbase-server  
/usr/lib/jvm/java-1.7.0.45-oracle-amd64/jre/bin/java -enableassertions 
-Xmx1900m -XX:MaxPermSize=100m -Djava.security.egd=file:/dev/./urandom 
-Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -jar 
/data/src/hbase/hbase-server/target/surefire/surefirebooter309171614766381235.jar
 /data/src/hbase/hbase-server/target/surefire/surefire1759919541562435761tmp 
/data/src/hbase/hbase-server/target/surefire/surefire_274515185028609451271tmp
Running org.apache.hadoop.hbase.io.encoding.TestEncodedSeekers
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 214.105 sec
{noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10609) Remove filterKeyValue(Cell ignored) from FilterBase

2014-03-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10609:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 Remove filterKeyValue(Cell ignored) from FilterBase
 ---

 Key: HBASE-10609
 URL: https://issues.apache.org/jira/browse/HBASE-10609
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.99.0

 Attachments: 10609-v1.txt


 FilterBase.java has been marked @InterfaceAudience.Private since 0.96
 You can find background in HBASE-10485: PrefixFilter#filterKeyValue() should 
 perform filtering on row key
 Dropping filterKeyValue(Cell ignored) would let developers make conscientious 
 decision on when ReturnCode.INCLUDE should be returned.
 Here is the thread on dev@ mailing list:
 http://search-hadoop.com/m/DHED4l8JBI1



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10622) Improve log and Exceptions in Export Snapshot

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919503#comment-13919503
 ] 

Hudson commented on HBASE-10622:


FAILURE: Integrated in HBase-0.94 #1308 (See 
[https://builds.apache.org/job/HBase-0.94/1308/])
HBASE-10622 Improve log and Exceptions in Export Snapshot (mbertozzi: rev 
1574034)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java


 Improve log and Exceptions in Export Snapshot 
 --

 Key: HBASE-10622
 URL: https://issues.apache.org/jira/browse/HBASE-10622
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-10622-v0.patch, HBASE-10622-v1.patch, 
 HBASE-10622-v2.patch, HBASE-10622-v3.patch, HBASE-10622-v4.patch


 from the logs of export snapshot is not really clear what's going on,
 adding some extra information useful to debug, and in some places the real 
 exception can be thrown



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10668) TestExportSnapshot runs too long

2014-03-04 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-10668:
--

 Summary: TestExportSnapshot runs too long
 Key: HBASE-10668
 URL: https://issues.apache.org/jira/browse/HBASE-10668
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Andrew Purtell
 Fix For: 0.98.1, 0.99.0


332 seconds

{noformat}
Forking command line: /bin/sh -c cd /data/src/hbase/hbase-server  
/usr/lib/jvm/java-1.7.0.45-oracle-amd64/jre/bin/java -enableassertions 
-Xmx1900m -XX:MaxPermSize=100m -Djava.security.egd=file:/dev/./urandom 
-Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -jar 
/data/src/hbase/hbase-server/target/surefire/surefirebooter1668068702110669265.jar
 /data/src/hbase/hbase-server/target/surefire/surefire5744357307851892501tmp 
/data/src/hbase/hbase-server/target/surefire/surefire_3661340119563945183029tmp
Running org.apache.hadoop.hbase.snapshot.TestExportSnapshot
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 332.421 sec
{noformat}

Slim down or split up.




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10622) Improve log and Exceptions in Export Snapshot

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919522#comment-13919522
 ] 

Hudson commented on HBASE-10622:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #186 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/186/])
HBASE-10622 Improve log and Exceptions in Export Snapshot (mbertozzi: rev 
1574032)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java


 Improve log and Exceptions in Export Snapshot 
 --

 Key: HBASE-10622
 URL: https://issues.apache.org/jira/browse/HBASE-10622
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-10622-v0.patch, HBASE-10622-v1.patch, 
 HBASE-10622-v2.patch, HBASE-10622-v3.patch, HBASE-10622-v4.patch


 from the logs of export snapshot is not really clear what's going on,
 adding some extra information useful to debug, and in some places the real 
 exception can be thrown



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-8304) Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port.

2014-03-04 Thread haosdent (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919527#comment-13919527
 ] 

haosdent commented on HBASE-8304:
-

And how to assigned this issue to me? I couldn't chage the value of HBase jira 
because I don't have permissions. Thank you very much.

 Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured 
 without default port.
 ---

 Key: HBASE-8304
 URL: https://issues.apache.org/jira/browse/HBASE-8304
 Project: HBase
  Issue Type: Bug
  Components: HFile, regionserver
Affects Versions: 0.94.5
Reporter: Raymond Liu
  Labels: bulkloader
 Attachments: HBASE-8304.patch


 When fs.default.name or fs.defaultFS in hadoop core-site.xml is configured as 
 hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir 
 where port is the hdfs namenode's default port. the bulkload operation will 
 not remove the file in bulk output dir. Store::bulkLoadHfile will think 
 hdfs:://ip and hdfs:://ip:port as different filesystem and go with copy 
 approaching instead of rename.
 The root cause is that hbase master will rewrite fs.default.name/fs.defaultFS 
 according to hbase.rootdir when regionserver started, thus, dest fs uri from 
 the hregion will not matching src fs uri passed from client.
 any suggestion what is the best approaching to fix this issue? 
 I kind of think that we could check for default port if src uri come without 
 port info.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10622) Improve log and Exceptions in Export Snapshot

2014-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919525#comment-13919525
 ] 

Hudson commented on HBASE-10622:


FAILURE: Integrated in HBase-0.94-JDK7 #72 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/72/])
HBASE-10622 Improve log and Exceptions in Export Snapshot (mbertozzi: rev 
1574034)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java


 Improve log and Exceptions in Export Snapshot 
 --

 Key: HBASE-10622
 URL: https://issues.apache.org/jira/browse/HBASE-10622
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: HBASE-10622-v0.patch, HBASE-10622-v1.patch, 
 HBASE-10622-v2.patch, HBASE-10622-v3.patch, HBASE-10622-v4.patch


 from the logs of export snapshot is not really clear what's going on,
 adding some extra information useful to debug, and in some places the real 
 exception can be thrown



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-8304) Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port.

2014-03-04 Thread haosdent (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

haosdent updated HBASE-8304:


Attachment: HBASE-8304-v2.patch

Fix line length error.

 Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured 
 without default port.
 ---

 Key: HBASE-8304
 URL: https://issues.apache.org/jira/browse/HBASE-8304
 Project: HBase
  Issue Type: Bug
  Components: HFile, regionserver
Affects Versions: 0.94.5
Reporter: Raymond Liu
  Labels: bulkloader
 Attachments: HBASE-8304-v2.patch, HBASE-8304.patch


 When fs.default.name or fs.defaultFS in hadoop core-site.xml is configured as 
 hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir 
 where port is the hdfs namenode's default port. the bulkload operation will 
 not remove the file in bulk output dir. Store::bulkLoadHfile will think 
 hdfs:://ip and hdfs:://ip:port as different filesystem and go with copy 
 approaching instead of rename.
 The root cause is that hbase master will rewrite fs.default.name/fs.defaultFS 
 according to hbase.rootdir when regionserver started, thus, dest fs uri from 
 the hregion will not matching src fs uri passed from client.
 any suggestion what is the best approaching to fix this issue? 
 I kind of think that we could check for default port if src uri come without 
 port info.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10669) [hbck tool] Usage is wrong for hbck tool for -sidelineCorruptHfiles option

2014-03-04 Thread Deepak Sharma (JIRA)
Deepak Sharma created HBASE-10669:
-

 Summary: [hbck tool] Usage is wrong for hbck tool for 
-sidelineCorruptHfiles option
 Key: HBASE-10669
 URL: https://issues.apache.org/jira/browse/HBASE-10669
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.96.1.1, 0.96.2, 0.98.1, 0.99.0
Reporter: Deepak Sharma
Priority: Minor
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.96.1.1


Usage is wrong for hbck tool for -sidelineCorruptHfiles option: 

it is like:
-sidelineCorruptHfiles  Quarantine corrupted HFiles.  implies 
-checkCorruptHfiles

here in sidelineCorruptHfiles and checkCorruptHfiles small 'f' is used but 
actually in code it is like 

  else if (cmd.equals(-checkCorruptHFiles)) {
checkCorruptHFiles = true;
  } else if (cmd.equals(-sidelineCorruptHFiles)) {
sidelineCorruptHFiles = true;
  }

so if we use sidelineCorruptHfiles option for hbck then it will give error 

Unrecognized option:-sidelineCorruptHfiles





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10669) [hbck tool] Usage is wrong for hbck tool for -sidelineCorruptHfiles option

2014-03-04 Thread Deepak Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13919553#comment-13919553
 ] 

Deepak Sharma commented on HBASE-10669:
---

added patch please check

 [hbck tool] Usage is wrong for hbck tool for -sidelineCorruptHfiles option
 --

 Key: HBASE-10669
 URL: https://issues.apache.org/jira/browse/HBASE-10669
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.96.2, 0.98.1, 0.99.0, 0.96.1.1
Reporter: Deepak Sharma
Priority: Minor
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.96.1.1

 Attachments: Hbck_usage_issue.patch


 Usage is wrong for hbck tool for -sidelineCorruptHfiles option: 
 it is like:
 -sidelineCorruptHfiles  Quarantine corrupted HFiles.  implies 
 -checkCorruptHfiles
 here in sidelineCorruptHfiles and checkCorruptHfiles small 'f' is used 
 but actually in code it is like 
   else if (cmd.equals(-checkCorruptHFiles)) {
 checkCorruptHFiles = true;
   } else if (cmd.equals(-sidelineCorruptHFiles)) {
 sidelineCorruptHFiles = true;
   }
 so if we use sidelineCorruptHfiles option for hbck then it will give error 
 Unrecognized option:-sidelineCorruptHfiles



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10669) [hbck tool] Usage is wrong for hbck tool for -sidelineCorruptHfiles option

2014-03-04 Thread Deepak Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Sharma updated HBASE-10669:
--

Status: Patch Available  (was: Open)

 [hbck tool] Usage is wrong for hbck tool for -sidelineCorruptHfiles option
 --

 Key: HBASE-10669
 URL: https://issues.apache.org/jira/browse/HBASE-10669
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.96.1.1, 0.96.2, 0.98.1, 0.99.0
Reporter: Deepak Sharma
Priority: Minor
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.96.1.1

 Attachments: Hbck_usage_issue.patch


 Usage is wrong for hbck tool for -sidelineCorruptHfiles option: 
 it is like:
 -sidelineCorruptHfiles  Quarantine corrupted HFiles.  implies 
 -checkCorruptHfiles
 here in sidelineCorruptHfiles and checkCorruptHfiles small 'f' is used 
 but actually in code it is like 
   else if (cmd.equals(-checkCorruptHFiles)) {
 checkCorruptHFiles = true;
   } else if (cmd.equals(-sidelineCorruptHFiles)) {
 sidelineCorruptHFiles = true;
   }
 so if we use sidelineCorruptHfiles option for hbck then it will give error 
 Unrecognized option:-sidelineCorruptHfiles



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10669) [hbck tool] Usage is wrong for hbck tool for -sidelineCorruptHfiles option

2014-03-04 Thread Deepak Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Sharma updated HBASE-10669:
--

Attachment: Hbck_usage_issue.patch

 [hbck tool] Usage is wrong for hbck tool for -sidelineCorruptHfiles option
 --

 Key: HBASE-10669
 URL: https://issues.apache.org/jira/browse/HBASE-10669
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.96.2, 0.98.1, 0.99.0, 0.96.1.1
Reporter: Deepak Sharma
Priority: Minor
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.96.1.1

 Attachments: Hbck_usage_issue.patch


 Usage is wrong for hbck tool for -sidelineCorruptHfiles option: 
 it is like:
 -sidelineCorruptHfiles  Quarantine corrupted HFiles.  implies 
 -checkCorruptHfiles
 here in sidelineCorruptHfiles and checkCorruptHfiles small 'f' is used 
 but actually in code it is like 
   else if (cmd.equals(-checkCorruptHFiles)) {
 checkCorruptHFiles = true;
   } else if (cmd.equals(-sidelineCorruptHFiles)) {
 sidelineCorruptHFiles = true;
   }
 so if we use sidelineCorruptHfiles option for hbck then it will give error 
 Unrecognized option:-sidelineCorruptHfiles



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10669) [hbck tool] Usage is wrong for hbck tool for -sidelineCorruptHfiles option

2014-03-04 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10669:
---

Fix Version/s: (was: 0.96.1.1)

 [hbck tool] Usage is wrong for hbck tool for -sidelineCorruptHfiles option
 --

 Key: HBASE-10669
 URL: https://issues.apache.org/jira/browse/HBASE-10669
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.96.2, 0.98.1, 0.99.0, 0.96.1.1
Reporter: Deepak Sharma
Priority: Minor
 Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18

 Attachments: Hbck_usage_issue.patch


 Usage is wrong for hbck tool for -sidelineCorruptHfiles option: 
 it is like:
 -sidelineCorruptHfiles  Quarantine corrupted HFiles.  implies 
 -checkCorruptHfiles
 here in sidelineCorruptHfiles and checkCorruptHfiles small 'f' is used 
 but actually in code it is like 
   else if (cmd.equals(-checkCorruptHFiles)) {
 checkCorruptHFiles = true;
   } else if (cmd.equals(-sidelineCorruptHFiles)) {
 sidelineCorruptHFiles = true;
   }
 so if we use sidelineCorruptHfiles option for hbck then it will give error 
 Unrecognized option:-sidelineCorruptHfiles



--
This message was sent by Atlassian JIRA
(v6.2#6252)


  1   2   3   >