[jira] Commented: (HBASE-1845) MultiGet, MultiDelete, and MultiPut - batched to the appropriate region servers

2010-06-07 Thread HBase Review Board (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12876134#action_12876134
 ] 

HBase Review Board commented on HBASE-1845:
---

Message from: Marc Limotte mslimo...@gmail.com

---
This is an automatically generated e-mail. To reply, visit:
http://review.hbase.org/r/151/
---

Review request for hbase.


Summary
---

Updated the patch to work on the trunk.  New unit tests are in 
TestMultiParallel.java, which replaces TestMultiParallelPut.java.
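The core idea of HBASE-1845 is to take a heterogeneous list of row operations, bucket them by the region server hosting each row, and issue one batched RPC per server. A minimal Java sketch of that grouping step, with illustrative names (groupByServer, the first-character "region map") that are not the actual patch API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch (not the patch's actual API) of the batching idea in
// HBASE-1845: bucket a mixed list of row operations by the region server
// hosting each row, so a single RPC per server carries all of its actions.
public class MultiActionSketch {

    // Region location is stubbed out: the first character of the row key
    // picks the server, standing in for a real .META. lookup.
    public static Map<String, List<String>> groupByServer(
            List<String> rows, Map<Character, String> serverForPrefix) {
        Map<String, List<String>> perServer = new HashMap<>();
        for (String row : rows) {
            String server = serverForPrefix.get(row.charAt(0));
            perServer.computeIfAbsent(server, s -> new ArrayList<>()).add(row);
        }
        // One batched request would then be sent per map entry.
        return perServer;
    }
}
```
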


This addresses bug HBASE-1845.


Diffs
-

  
http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/Action.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/Get.java
 951973 
  
http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/HConnection.java
 951973 
  
http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
 951973 
  
http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/HTable.java
 951973 
  
http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/MultiAction.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/MultiPut.java
 951973 
  
http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/MultiPutResponse.java
 951973 
  
http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/MultiResponse.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/Row.java
 951973 
  
http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/HbaseObjectWritable.java
 951973 
  
http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/HRegionInterface.java
 951973 
  
http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
 951973 
  
http://svn.apache.org/repos/asf/hbase/trunk/src/test/java/org/apache/hadoop/hbase/TestMultiParallel.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/hbase/trunk/src/test/java/org/apache/hadoop/hbase/TestMultiParallelPut.java
 951973 

Diff: http://review.hbase.org/r/151/diff


Testing
---


Thanks,

Marc




 MultiGet, MultiDelete, and MultiPut - batched to the appropriate region 
 servers
 ---

 Key: HBASE-1845
 URL: https://issues.apache.org/jira/browse/HBASE-1845
 Project: HBase
  Issue Type: New Feature
Reporter: Erik Holstad
 Fix For: 0.21.0

 Attachments: batch.patch, hbase-1845_0.20.3.patch, 
 hbase-1845_0.20.5.patch, multi-v1.patch


 I've started to create a general interface for doing these batch/multi calls 
 and would like to get some input and thoughts about how we should handle this 
 and what the protocol should
 look like. 
 First naive patch, coming soon.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HBASE-2677) REST interface improvements

2010-06-07 Thread Andrew Purtell (JIRA)
REST interface improvements
---

 Key: HBASE-2677
 URL: https://issues.apache.org/jira/browse/HBASE-2677
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell


Umbrella issue for REST representation improvements.




[jira] Updated: (HBASE-2557) [stargate] Avro serialization support

2010-06-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-2557:
--

Parent: HBASE-2677
Issue Type: Sub-task  (was: Improvement)

 [stargate] Avro serialization support
 -

 Key: HBASE-2557
 URL: https://issues.apache.org/jira/browse/HBASE-2557
 Project: HBase
  Issue Type: Sub-task
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor

 Do with Avro like Stargate does with protobufs.




[jira] Created: (HBASE-2679) new XML representation

2010-06-07 Thread Andrew Purtell (JIRA)
new XML representation
--

 Key: HBASE-2679
 URL: https://issues.apache.org/jira/browse/HBASE-2679
 Project: HBase
  Issue Type: Sub-task
Reporter: Andrew Purtell
Assignee: Andrew Purtell







[jira] Updated: (HBASE-2679) [stargate] new XML representation

2010-06-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-2679:
--

Summary: [stargate] new XML representation  (was: new XML 
representation)
Description: Provide a new XML representation that is a little cleaner, and 
where all occurrences of _cell_ are replaced with _value_.
Component/s: rest

 [stargate] new XML representation
 -

 Key: HBASE-2679
 URL: https://issues.apache.org/jira/browse/HBASE-2679
 Project: HBase
  Issue Type: Sub-task
  Components: rest
Reporter: Andrew Purtell
Assignee: Andrew Purtell

 Provide a new XML representation that is a little cleaner, and where all 
 occurrences of _cell_ are replaced with _value_.




[jira] Commented: (HBASE-50) Snapshot of table

2010-06-07 Thread Li Chongxin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-50?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12876259#action_12876259
 ] 

Li Chongxin commented on HBASE-50:
--

@Stack, thanks for the comments. Here are some replies and questions.

bq. + I don't think you should take on requirement 1), only the hbase admin can 
create a snapshot. There is no authentication/access control in hbase currently 
- it's coming but not here yet - and without it, this would be hard for you to 
enforce.

I think I didn't state it properly. I know access control is not included in 
hbase currently. What I mean is that snapshot should live in class HBaseAdmin 
instead of HTable. Dividing client-side operations between these two classes 
also anticipates the access control to be provided in the future, doesn't it?

bq. + Regards requirement 2., I'd suggest that how the snapshot gets copied out 
from under hbase should also be outside the scope of your work. I'd say your 
work is making a viable snapshot that can be copied, with perhaps some tests to 
prove it works - that might copy off data - but in general, I'd say how the 
actual copying is done is outside of the scope of this issue.

Strictly speaking, requirement 2 is not about how the snapshot is copied out 
from under hbase. Actually, table data is not really copied during a snapshot 
in the current design. To make it fast, the snapshot just captures the state of 
the table, especially all the table files. So requirement 2 just ensures the 
table data (the hfiles, really) are not mutated while the snapshot is taken.

bq. + How are you going to ensure the table is in 'good status'? Can you not 
snapshot it whatever its state? Is all regions being online a requirement?

Regarding tables that are disabled, all regions being online should not be a 
requirement. As for 'good status', what I'm thinking is that a table region 
could be in PENDING_OPEN or PENDING_CLOSE state, in which case it might be half 
opened. I'm not sure whether the RS or the master should take on the 
responsibility of performing the snapshot at that time. On the other hand, if 
the table is completely opened or closed, the snapshot can be taken by the RS 
or the master.

bq. + FYI, wal logs are now archived, not deleted. Replication needs them. 
Replication might also be managing clean up of the archives (j-d, what's the 
story here?). If there is an outstanding snapshot, one that has not been 
deleted, then none of its wals should be removed.

Great. In the current design, WAL log files are the only data files that are 
really copied. If they are now archived instead of deleted, we can create 
references to log files, just as for hfiles, instead of copying the actual 
data. This will further shorten the snapshot time. Another LogCleanerDelegate, 
say ReferencedLogCleaner, could be created to check whether a log file is still 
referenced by a snapshot before it is deleted. What do you think?
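A minimal sketch of the ReferencedLogCleaner idea, assuming a hypothetical isLogDeletable hook modeled on HBase's LogCleanerDelegate and an in-memory stand-in for the set of snapshot-referenced logs:

```java
import java.util.Set;

// Hypothetical sketch of the ReferencedLogCleaner proposed above: an
// archived WAL may be deleted only if no outstanding snapshot still
// references it. The registry of referenced logs is an in-memory stand-in
// for whatever the real implementation would consult.
public class ReferencedLogCleanerSketch {

    private final Set<String> snapshotReferencedLogs;

    public ReferencedLogCleanerSketch(Set<String> snapshotReferencedLogs) {
        this.snapshotReferencedLogs = snapshotReferencedLogs;
    }

    // Modeled on a LogCleanerDelegate-style isLogDeletable check: veto
    // deletion of any log a live snapshot points at.
    public boolean isLogDeletable(String logFileName) {
        return !snapshotReferencedLogs.contains(logFileName);
    }
}
```
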

bq. + Can I say 'snapshot all tables'? Can I say 'snapshot catalog tables' - 
meta and root tables?

I think snapshotting .META. works fine, but snapshotting the root table is a 
little tricky. When a snapshot is performed for a user table, .META. is updated 
to keep track of the file references. If the .META. table is snapshotted, 
-ROOT- can be updated to keep track of the file references. But where do we 
keep the file references for the -ROOT- table (region) if it is snapshotted - 
still in -ROOT-? Should this newly updated file reference information also be 
included in the snapshot?

bq. + If a RS fails between 'ready' and 'finish', does this mean we abandon the 
snapshot?

Yes. If a RS fails between 'ready' and 'finish', it should notify the client or 
master, whichever orchestrates, and then the client or the master will send a 
signal via ZK to stop the snapshot on all RSs. Something like this.

bq. + I'd say if a RS is not ready for snapshot, just fail it. Something is 
badly wrong if a RS can't snapshot.

Currently, there is a timeout for snapshot readiness. If a RS is ready, it 
waits for all the RSs to be ready; then the snapshot starts on all RSs. 
Otherwise, the ready RSs time out and the snapshot does not start on any RS. 
It's a synchronous approach. Do you think this is appropriate? Will it create 
too much load to perform the snapshot concurrently on the RSs? (Jonathan 
prefers an asynchronous method.)
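The synchronous ready/finish barrier described here can be sketched as a simple decision rule; ZooKeeper and the timeout machinery are replaced by plain parameters purely to illustrate the control flow (all names are hypothetical):

```java
import java.util.Set;

// Illustrative sketch of the synchronous barrier described above: the
// snapshot starts only if every region server reports ready before the
// timeout; otherwise it is abandoned everywhere. ZooKeeper coordination is
// replaced by in-memory sets purely to show the decision rule.
public class SnapshotBarrierSketch {

    public static String decide(Set<String> allServers,
                                Set<String> readyServers,
                                boolean timedOut) {
        if (timedOut || !readyServers.containsAll(allServers)) {
            return "abandon"; // signal all RSs to stop the snapshot
        }
        return "start"; // every RS is ready: begin the snapshot on all RSs
    }
}
```
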

bq. + Would it make sense for there to be a state between ready and finish and 
the data in this intermediate state would be the RS's progress?

Do you mean a znode is created for each RS to track its progress? Then how do 
you define the RS's progress, and what data would be kept in this znode?

Thanks again for the comments. I will update the design document based on them.


 Snapshot of table
 -

 Key: HBASE-50
 URL: https://issues.apache.org/jira/browse/HBASE-50
 Project: HBase
  Issue Type: New Feature
Reporter: Billy Pearson
Assignee: Li Chongxin
  

[jira] Commented: (HBASE-2397) Bytes.toStringBinary escapes printable chars

2010-06-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12876292#action_12876292
 ] 

stack commented on HBASE-2397:
--

bq. I also debated whether such a minor code change needed time for review. For 
that, I may have been wrong. From now on I'll put even one-liners up on review 
board and wait for an ack before doing anything.

Please do not go to such an extreme, Andrew.  IMO, you did nothing 'wrong' going 
ahead and committing a small fix for an issue that had been hanging out there 
for a good month and more w/o a response to your query (I was of your opinion 
until Ryan explained the why -- see my remark of April 7th above).  I also 
agree w/ your caution flagging the item as an incompatible change.

@Ryan, please make an issue detailing what of the above to revert.

 Bytes.toStringBinary escapes printable chars
 

 Key: HBASE-2397
 URL: https://issues.apache.org/jira/browse/HBASE-2397
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 0.20.6, 0.21.0

 Attachments: HBASE-2397.patch


 Bytes.toStringBinary hex-escapes printable chars such as '@', '$', '#', '%', 
 '&', '*', '(', ')', '{', '}', '[', ']', ';', ',', '~', '|'. Why?
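To make the complaint concrete, here is a hedged sketch of the behavior the reporter wants: hex-escape only genuinely unprintable bytes and leave printable ASCII punctuation alone. This is illustrative, not the committed HBase implementation:

```java
// Hedged sketch of the behavior the issue asks for: a toStringBinary-style
// escape that hex-escapes only unprintable bytes, leaving printable ASCII
// punctuation such as '@' and '%' alone. Not the actual HBase Bytes code.
public class StringBinarySketch {

    public static String toStringBinary(byte[] b) {
        StringBuilder sb = new StringBuilder();
        for (byte x : b) {
            int ch = x & 0xFF;
            if (ch >= 0x20 && ch < 0x7F && ch != '\\') {
                sb.append((char) ch); // printable ASCII stays as-is
            } else {
                sb.append(String.format("\\x%02X", ch)); // escape the rest
            }
        }
        return sb.toString();
    }
}
```
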




[jira] Created: (HBASE-2681) Graceful fallback to NONE when the native compression algo is not found

2010-06-07 Thread Michele Catasta (JIRA)
Graceful fallback to NONE when the native compression algo is not found
---

 Key: HBASE-2681
 URL: https://issues.apache.org/jira/browse/HBASE-2681
 Project: HBase
  Issue Type: Improvement
  Components: io
Reporter: Michele Catasta
Priority: Minor
 Fix For: 0.21.0


stack's words: Fellas have complained about the way broken lzo manifests itself. 
HBase will actually take on writes. It's only when it goes to flush that it 
drops the edits, and in a way that is essentially hidden from the client - 
exceptions are thrown in the regionserver log. So, I'd say, make another issue 
if you don't mind, but it's not for you to fix, not unless you are inclined. It'd 
be about a better user experience around choosing a compression that is not 
supported or not properly installed.




[jira] Updated: (HBASE-2681) Graceful fallback to NONE when the native compression algo is not found

2010-06-07 Thread Michele Catasta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michele Catasta updated HBASE-2681:
---

Attachment: HBASE-2681.patch

Extracted a method that tries to instantiate a native codec and, if not found, 
just writes a WARN in the logs instead of throwing a RuntimeException. Trivial 
test included.
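The shape of that change can be sketched as follows; the native-codec loading is stubbed with a flag, and all names are illustrative rather than the patch's actual code:

```java
// Hedged sketch of the HBASE-2681 idea: try to instantiate a native
// compression codec; if the native library is absent, log a WARN and fall
// back to no compression instead of propagating a RuntimeException.
// Names and the nativeAvailable flag are illustrative stand-ins.
public class CompressionFallbackSketch {

    public enum Algorithm { LZO, GZ, NONE }

    public static Algorithm resolve(Algorithm requested, boolean nativeAvailable) {
        if (requested == Algorithm.NONE) {
            return Algorithm.NONE;
        }
        try {
            if (!nativeAvailable) {
                // Stands in for the failure when the native codec is missing.
                throw new RuntimeException("native codec not found: " + requested);
            }
            return requested;
        } catch (RuntimeException e) {
            // Warn and degrade gracefully instead of failing the flush later.
            System.err.println("WARN: " + e.getMessage() + ", falling back to NONE");
            return Algorithm.NONE;
        }
    }
}
```
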

 Graceful fallback to NONE when the native compression algo is not found
 ---

 Key: HBASE-2681
 URL: https://issues.apache.org/jira/browse/HBASE-2681
 Project: HBase
  Issue Type: Improvement
  Components: io
Reporter: Michele Catasta
Priority: Minor
 Fix For: 0.21.0

 Attachments: HBASE-2681.patch


 stack's words: Fellas have complained about the way broken lzo manifests 
 itself. HBase will actually take on writes. It's only when it goes to flush 
 that it drops the edits, and in a way that is essentially hidden from the client 
 - exceptions are thrown in the regionserver log. So, I'd say, make another 
 issue if you don't mind, but it's not for you to fix, not unless you are 
 inclined. It'd be about a better user experience around choosing a compression 
 that is not supported or not properly installed.




[jira] Commented: (HBASE-2616) TestHRegion.testWritesWhileGetting flaky on trunk

2010-06-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12876321#action_12876321
 ] 

stack commented on HBASE-2616:
--

Builds still failing on this test: See #90, #91 and #94 builds.

 TestHRegion.testWritesWhileGetting flaky on trunk
 -

 Key: HBASE-2616
 URL: https://issues.apache.org/jira/browse/HBASE-2616
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Todd Lipcon
Assignee: ryan rawson
Priority: Critical
 Fix For: 0.20.5

 Attachments: HBASE-2616.patch


 Saw this failure on my internal hudson:
 junit.framework.AssertionFailedError: expected:\x00\x00\x00\x96 but 
 was:\x00\x00\x01\x00
   at 
 org.apache.hadoop.hbase.HBaseTestCase.assertEquals(HBaseTestCase.java:684)
   at 
 org.apache.hadoop.hbase.regionserver.TestHRegion.testWritesWhileGetting(TestHRegion.java:2334)




[jira] Commented: (HBASE-50) Snapshot of table

2010-06-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-50?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12876352#action_12876352
 ] 

stack commented on HBASE-50:


On, 5 Snapshot Creation

bq. Because this table region must be online, dumping the HRegionInfo of the 
region to a file .regioninfo under the snapshot directory of this region will 
obtain the metadata.

...The above is wrong, right?  We can snapshot online tables?

+1 on reading .META. data, flushing it to .regioninfo to be sure you have the 
latest, and then copying that. (Or, instead, you could ensure that on any 
transition, the .regioninfo is updated.  If this is happening, no need to do an 
extra flush of .META. at snapshot time.  This latter would be better IMO.)

So, do you foresee your restore-from-snapshot running split over the logs as 
part of the restore?  That makes sense to me.

Why do you think we need a Reference to the hfile?  Why not just a file that 
lists the names of all the hfiles?  We don't need to execute the snapshot, do 
we?  Restoring from a snapshot would be a bunch of file renames and wal 
splitting?  Or what are you thinking?  (Oh, maybe I'll find out when I read 
chapter 6.)

bq. can be created just by the master.

Let's not have the master run the snapshot... let the client run it?

Shall we name the new .META. column family 'snapshot' rather than 'reference'?

I like this idea of keeping region snapshot and reference counting beside the 
region up in .META.

On the filename '.deleted', I think it a mistake to give it a '.' prefix, 
especially given it's in the snapshot dir (the snapshot dir probably needs to 
be prefixed with a character illegal in tablenames, such as a '.', so it's not 
taken for a table directory).


Regarding 'Not sure whether there will be a name collision under this .deleted 
directory', j-d has done work to ensure WALs are uniquely named.  Storefiles 
are given a random id.  We should probably do the extra work to ensure they are 
for sure unique... give them a UUID or something so we don't ever clash.

After reading chapter 6, I fail to see why we should keep References to files.  
Maybe I'm missing something.

bq. Not decided where to keep all the snapshots information, in a meta file 
under snapshot directory

Do you need a new catalog table called 'snapshots' to keep a list of snapshots, 
of what each snapshot comprises, and some other metadata such as when it was 
made, whether it succeeded, who did it and why?

On the other hand, a directory in hdfs of files per snapshot will be more 
robust.

Section 7.4 is missing the split of WAL files.  Perhaps this can be done in a 
MR job?

Design looks excellent Li.


 Snapshot of table
 -

 Key: HBASE-50
 URL: https://issues.apache.org/jira/browse/HBASE-50
 Project: HBase
  Issue Type: New Feature
Reporter: Billy Pearson
Assignee: Li Chongxin
Priority: Minor
 Attachments: HBase Snapshot Design Report V2.pdf, snapshot-src.zip


 Having an option to take a snapshot of a table would be very useful in 
 production.
 What I would like to see this option do is a merge of all the data into 
 one or more files stored in the same folder on the dfs. This way we could 
 save data in case of a software bug in hadoop or user code. 
 The other advantage would be the ability to export a table to multiple 
 locations. Say I had a read-only table that must be online. I could take a 
 snapshot of it when needed, export it to a separate data center, and have it 
 loaded there; then I would have it online at multiple data centers for load 
 balancing and failover.
 I understand that hadoop takes away the need for backups to protect against 
 failed servers, but this does not protect us from software bugs that 
 might delete or alter data in ways we did not plan. We should have a way to 
 roll back a dataset.




[jira] Commented: (HBASE-2678) version the REST interface

2010-06-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12876357#action_12876357
 ] 

stack commented on HBASE-2678:
--

Is the 'vnd.' prefix like 'x-'?  I seem to be missing how this versions the 
interface.  Where is the version number in the above?  Thanks.

 version the REST interface
 --

 Key: HBASE-2678
 URL: https://issues.apache.org/jira/browse/HBASE-2678
 Project: HBase
  Issue Type: Sub-task
Reporter: Andrew Purtell
Assignee: Andrew Purtell

 As mentioned in HBASE-2563, we should deprecate all uses of _cell_ in the API 
 and replace with new APIs that use _value_ instead. So we need a way to 
 version the REST interface, to provide an updated API while maintaining 
 access to the deprecated one until the next major revision. However, 
 something like this I consider wrong:
 {{/path/to/v1/resource}}
 and also:
 {{/v2/path/to/resource}}
 because the resource is the same regardless of the representation change. 
 REST makes a distinction between the representation and the resource using 
 media types. 
 Currently Stargate supports the following encodings:
 * {{text/plain}}  (in some cases)
 * binary: {{application/octet-stream}} (freeform, no schema change needed)
 * XML: {{text/xml}}
 * JSON: {{application/json}}
 * protobufs: {{application/x-protobuf}}
 We can add Avro encoding support in HBASE-2557 with the new representation as 
 {{application/x-avro-binary}} immediately. For XML, JSON, and protobuf 
 encoding, we can support new representations using the following new media 
 types in the current version:
 * XML: {{application/vnd.hbase+xml}}
 * JSON: {{application/vnd.hbase+json}}
 * protobufs: {{application/vnd.hbase+protobuf}}
 * and for sake of consistency: {{application/vnd.hbase+avro}}
 and then in the next major version recognize both MIME types but return the 
 same (newer) representation. 




[jira] Commented: (HBASE-2678) version the REST interface

2010-06-07 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12876361#action_12876361
 ] 

Andrew Purtell commented on HBASE-2678:
---

bq. Is the 'vnd.' prefix like 'x-'?  

vnd.foo+type is a common convention for vendor-specific content types, like x-. 

bq. I seem to be missing how this versions the interface.  Where is the version 
number in the above?

It's not a version per se, but an alternate and newer representation type. So 
the current set of media types will correspond to version 1 and the new media 
types will correspond to version 2. At the next major release, support for 
both sets of types will collapse to just version 2. If we ever have to do 
this again, we can use the vnd.foo+type media types to select version 3 etc.

But anyway, we can actually provide numeric versioning, like

{{vnd.hbase.vN+type}}

so why not, if that would reduce confusion.
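The vnd.hbase.vN+type scheme can be made concrete with a small helper that builds and parses such media types; treating the legacy types (text/xml, application/json) as version 1 is an assumption consistent with the proposal, not settled API:

```java
// Illustrative helper for the vnd.hbase.vN+type versioning scheme proposed
// above. The version rides in the media type, not the URL, so one resource
// path can serve multiple representations via content negotiation.
// Treating legacy media types as version 1 is an assumption.
public class MediaTypeVersionSketch {

    // Build a versioned HBase media type, e.g. application/vnd.hbase.v2+xml
    public static String mediaType(int version, String format) {
        return "application/vnd.hbase.v" + version + "+" + format;
    }

    // Extract the version; legacy types without the vendor prefix map to v1.
    public static int versionOf(String mediaType) {
        int i = mediaType.indexOf("vnd.hbase.v");
        if (i < 0) {
            return 1; // e.g. text/xml, application/json
        }
        int start = i + "vnd.hbase.v".length();
        int plus = mediaType.indexOf('+', start);
        return Integer.parseInt(mediaType.substring(start, plus));
    }
}
```

A client would then request the newer representation with an Accept header such as `application/vnd.hbase.v2+xml` while old clients keep using `text/xml`.
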



  




 version the REST interface
 --

 Key: HBASE-2678
 URL: https://issues.apache.org/jira/browse/HBASE-2678
 Project: HBase
  Issue Type: Sub-task
Reporter: Andrew Purtell
Assignee: Andrew Purtell

 As mentioned in HBASE-2563, we should deprecate all uses of _cell_ in the API 
 and replace with new APIs that use _value_ instead. So we need a way to 
 version the REST interface, to provide an updated API while maintaining 
 access to the deprecated one until the next major revision. However, 
 something like this I consider wrong:
 {{/path/to/v1/resource}}
 and also:
 {{/v2/path/to/resource}}
 because the resource is the same regardless of the representation change. 
 REST makes a distinction between the representation and the resource using 
 media types. 
 Currently Stargate supports the following encodings:
 * {{text/plain}}  (in some cases)
 * binary: {{application/octet-stream}} (freeform, no schema change needed)
 * XML: {{text/xml}}
 * JSON: {{application/json}}
 * protobufs: {{application/x-protobuf}}
 We can add Avro encoding support in HBASE-2557 with the new representation as 
 {{application/x-avro-binary}} immediately. For XML, JSON, and protobuf 
 encoding, we can support new representations using the following new media 
 types in the current version:
 * XML: {{application/vnd.hbase+xml}}
 * JSON: {{application/vnd.hbase+json}}
 * protobufs: {{application/vnd.hbase+protobuf}}
 * and for sake of consistency: {{application/vnd.hbase+avro}}
 and then in the next major version recognize both MIME types but return the 
 same (newer) representation. 




[jira] Commented: (HBASE-2682) build error: hbase-core has wrong packaging: jar. Must be 'pom'.

2010-06-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12876362#action_12876362
 ] 

stack commented on HBASE-2682:
--

The user is using the wrong id, I believe; they are using hbase-core rather 
than hbase (there used to be an hbase-core submodule, but it was removed a good 
while back). 

 build error:  hbase-core has wrong packaging: jar. Must be 'pom'.
 ---

 Key: HBASE-2682
 URL: https://issues.apache.org/jira/browse/HBASE-2682
 Project: HBase
  Issue Type: Bug
 Environment: Ubuntu 9
Reporter: Eugene Koontz

 Imran M Yousuf writes: 
 (http://permalink.gmane.org/gmane.comp.java.hadoop.hbase.user/10525)
 I am trying to use HBase as a maven dependency and am running into an
 error for 0.21-SNAPSHOT :(. I have attached the DEBUG maven output;
 Maven and Java versions and relevant POMs are as follows.
 See the link for the aforementioned maven output.




[jira] Commented: (HBASE-1364) [performance] Distributed splitting of regionserver commit logs

2010-06-07 Thread Alex Newman (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12876364#action_12876364
 ] 

Alex Newman commented on HBASE-1364:


So as I understand it, processServerShutdown always runs one split at a time 
due to locking issues. If we relax this requirement, I would prefer it be done 
in a separate jira. Any objections? What else should I do here to move things 
forward? I think the only things that need to be done are:

- Add some more tests (suggestions welcome)
- Switch all calls from splitLog to distributedSplitLog


 [performance] Distributed splitting of regionserver commit logs
 ---

 Key: HBASE-1364
 URL: https://issues.apache.org/jira/browse/HBASE-1364
 Project: HBase
  Issue Type: Improvement
Reporter: stack
Assignee: Alex Newman
Priority: Critical
 Fix For: 0.22.0

 Attachments: 1364-v2.patch, 1364.patch

  Time Spent: 8h
  Remaining Estimate: 0h

 HBASE-1008 has some improvements to our log splitting on regionserver crash; 
 but it needs to run even faster.
 (Below is from HBASE-1008)
 In bigtable paper, the split is distributed. If we're going to have 1000 
 logs, we need to distribute or at least multithread the splitting.
 1. As is, regions starting up expect to find one reconstruction log only. 
 Need to make it so they pick up a bunch of edit logs, and it should be fine 
 that logs are elsewhere in hdfs in an output directory written by all split 
 participants, whether multithreaded or a mapreduce-like distributed process 
 (Let's write our distributed sort first as a MR so we learn what's involved; 
 the distributed sort, as much as possible, should use MR framework pieces). On 
 startup, regions go to this directory and pick up the files written by split 
 participants, deleting and clearing the dir when all have been read in. Making 
 it so we can take multiple logs for input can also make the split process more 
 robust, rather than the current tenuous process which loses all edits if it 
 doesn't make it to the end without error.
 2. Each column family rereads the reconstruction log to find its edits. Need 
 to fix that. The split can sort the edits by column family so a store only 
 reads its edits.
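Point 2 in the description, bucketing split output so each store reads only its own edits, can be sketched as follows (Edit is a hypothetical stand-in for a real WAL entry):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hedged sketch of point 2 above: during log splitting, bucket each WAL
// edit by its column family so that on replay a store reads only its own
// edits instead of rescanning the whole reconstruction log.
// Edit is a hypothetical stand-in for a real WAL entry.
public class SplitByFamilySketch {

    public static class Edit {
        final String family;
        final String row;
        public Edit(String family, String row) {
            this.family = family;
            this.row = row;
        }
    }

    public static Map<String, List<Edit>> splitByFamily(List<Edit> walEdits) {
        Map<String, List<Edit>> perFamily = new TreeMap<>();
        for (Edit e : walEdits) {
            perFamily.computeIfAbsent(e.family, f -> new ArrayList<>()).add(e);
        }
        return perFamily; // one bucket per column family, replayed per store
    }
}
```
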




[jira] Updated: (HBASE-1748) ClusterStatus needs to print out who has master role

2010-06-07 Thread Lars George (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars George updated HBASE-1748:
---

Attachment: HBASE-1748.patch

HBASE-1748.patch adds the current master's IP address to ClusterStatus, plus a 
request filter for the master UI that redirects from an inactive master to the 
active one.

 ClusterStatus needs to print out who has master role
 

 Key: HBASE-1748
 URL: https://issues.apache.org/jira/browse/HBASE-1748
 Project: HBase
  Issue Type: Bug
Reporter: stack
Priority: Trivial
 Fix For: 0.21.0

 Attachments: HBASE-1748.patch


 Is in zk_dump but not in clusterstatus.
 You need it when you have 5 masters and you are trying to find the UI.




[jira] Commented: (HBASE-2618) Don't inherit from HConstants

2010-06-07 Thread HBase Review Board (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12876383#action_12876383
 ] 

HBase Review Board commented on HBASE-2618:
---

Message from: Ryan Rawson ryano...@gmail.com


bq.  On 2010-06-06 09:08:19, Jonathan Gray wrote:
bq.   This patch changes a bunch of tabbing patterns to be different from what 
is currently done in the codebase.  Primarily the full indenting of arguments 
to align with each other.
bq.   
bq.   As far as I know, this format is not used anywhere else in HBase.  I 
don't think patches to clean up HConstants inheriting should do a partial 
change to a different style for something unrelated?
bq.  
bq.  Benoit Sigoure wrote:
bq.  As far as I can tell HBase doesn't seem to follow any consistent 
coding style, so I did what everyone else seems to be doing: I let my editor do 
the re-indenting the way I typically do.
bq.  I'll upload a new patch with the indentation change.
bq.  
bq.  Other than that, what do you think about the change?
bq.  
bq.  Jonathan Gray wrote:
bq.  It is mostly consistent.  In general the practice is to keep the same 
styling as the file you're in or if a new file, in whatever else is like it 
that's already there.
bq.  
bq.  Other than that, I'm a big +1 on removing the extending interface 
pattern we use on HConstants and your patch looks good to me on that front.
bq.  
bq.  Jonathan Gray wrote:
bq.  As far as the tests go, I've never actually seen that failing pattern. 
 I've been running unit tests on trunk in at least 4 different environments.
bq.  
bq.  I wonder where this is coming from... Is there a jira open for this 
specific failure?  If not, let's open one so we can get to the bottom of it.

I have noticed that HBase is mostly K&R with a continuation indent, not 
alignment.  It tends to match the IntelliJ default code style with tabs=2, 
continuation-indent=4.  Perhaps tweak your editor to only use align-braces on 
method parameters (as per IJ) and do continuation indents on all others.


- Ryan


---
This is an automatically generated e-mail. To reply, visit:
http://review.hbase.org/r/132/#review135
---





 Don't inherit from HConstants
 -

 Key: HBASE-2618
 URL: https://issues.apache.org/jira/browse/HBASE-2618
 Project: HBase
  Issue Type: Wish
Reporter: Benoit Sigoure
Assignee: Benoit Sigoure
Priority: Minor

 Can we stop using this idiom to inherit from HConstants?  This is a known bad 
 pattern and is recommended against in many places including Effective Java 
 (item 17).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HBASE-2683) Make it obvious in the documentation that ZooKeeper needs permanent storage

2010-06-07 Thread Jean-Daniel Cryans (JIRA)
Make it obvious in the documentation that ZooKeeper needs permanent storage
---

 Key: HBASE-2683
 URL: https://issues.apache.org/jira/browse/HBASE-2683
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Daniel Cryans
 Fix For: 0.20.6, 0.21.0


If our users let HBase manage ZK, they probably won't bother combing through 
hbase-default.xml to figure out that they need to set 
hbase.zookeeper.property.dataDir to something other than /tmp. It probably 
happened to deinspanjer in prod today, and that's a showstopper.

The fix would be, at least, to improve the Getting Started documentation to 
include that configuration in the Fully-Distributed Operation section.
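As a sketch of what the documentation could show, something like the following in hbase-site.xml (the property name comes from hbase-default.xml; the path is only an example):

```xml
<!-- hbase-site.xml: point ZooKeeper's data directory at permanent storage,
     not the default under /tmp. The path below is an example. -->
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/var/hbase/zookeeper</value>
</property>
```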




[jira] Commented: (HBASE-2681) Graceful fallback to NONE when the native compression algo is not found

2010-06-07 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876393#action_12876393
 ] 

Dave Latham commented on HBASE-2681:


It's definitely not good to fail silently at a later time.

However, I'd prefer to see it fail loudly and immediately if I try to use a 
form of compression that is not available rather than a quiet fallback to no 
compression that I would likely not notice.
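A minimal sketch of the fail-fast idea, assuming availability can be approximated by whether the codec class loads (the class names and helper here are invented for illustration, not HBase's actual API):

```java
// Hedged sketch: fail fast if a requested codec's class cannot be loaded,
// instead of silently falling back to NONE. Names are illustrative only.
public class CodecCheck {
  static boolean codecAvailable(String codecClassName) {
    try {
      Class.forName(codecClassName);
      return true;
    } catch (ClassNotFoundException e) {
      return false;
    }
  }

  static void requireCodec(String codecClassName) {
    if (!codecAvailable(codecClassName)) {
      // Loud, immediate failure rather than a quiet fallback.
      throw new IllegalArgumentException(
          "Compression codec not available: " + codecClassName);
    }
  }

  public static void main(String[] args) {
    // A class guaranteed to exist vs. one that is not.
    System.out.println(codecAvailable("java.util.zip.Deflater"));
    System.out.println(codecAvailable("com.example.MissingLzoCodec"));
  }
}
```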

 Graceful fallback to NONE when the native compression algo is not found
 ---

 Key: HBASE-2681
 URL: https://issues.apache.org/jira/browse/HBASE-2681
 Project: HBase
  Issue Type: Improvement
  Components: io
Reporter: Michele Catasta
Priority: Minor
 Fix For: 0.21.0

 Attachments: HBASE-2681.patch


 stack's words: Fellas have complained about the way broken LZO manifests 
 itself. HBase will actually accept writes; it's only when it goes to flush 
 that it drops the edits, and in a way that is essentially hidden from the 
 client - exceptions are thrown in the regionserver log. So, I'd say, make 
 another issue if you don't mind, but it's not for you to fix, not unless you 
 are inclined. It'd be about a better user experience around choosing a 
 compression that is not supported or not properly installed.




[jira] Commented: (HBASE-2618) Don't inherit from HConstants

2010-06-07 Thread HBase Review Board (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876399#action_12876399
 ] 

HBase Review Board commented on HBASE-2618:
---

Message from: Benoit Sigoure tsuna...@gmail.com

---
This is an automatically generated e-mail. To reply, visit:
http://review.hbase.org/r/132/
---

(Updated 2010-06-07 13:26:26.467579)


Review request for hbase.


Changes
---

New version with coding-style fixes; HConstants also becomes a public final 
class instead of an interface.


Summary
---

HBASE-2618 Don't inherit from HConstants.

Bonus: minor aesthetic / coding style clean ups and minor code changes.


This addresses bug HBASE-2618.


Diffs (updated)
-

  trunk/src/main/java/org/apache/hadoop/hbase/HConstants.java 951935 
  trunk/src/main/java/org/apache/hadoop/hbase/HMerge.java 951935 
  trunk/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java 951935 
  trunk/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java 
951935 
  trunk/src/main/java/org/apache/hadoop/hbase/client/MetaScanner.java 951935 
  trunk/src/main/java/org/apache/hadoop/hbase/master/BaseScanner.java 951935 
  trunk/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 951935 
  trunk/src/main/java/org/apache/hadoop/hbase/master/RegionManager.java 951935 
  trunk/src/main/java/org/apache/hadoop/hbase/master/RegionServerOperation.java 
951935 
  trunk/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java 951935 
  trunk/src/main/java/org/apache/hadoop/hbase/master/TableOperation.java 951935 
  
trunk/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java
 951935 
  trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java 951935 
  trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java 
951935 
  trunk/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java 951935 
  trunk/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java 
951935 
  trunk/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLog.java 951935 
  trunk/src/main/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java 951935 
  trunk/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKServerTool.java 
951935 
  trunk/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperWrapper.java 
951935 
  trunk/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java 951935 
  trunk/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java 951935 
  
trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestExplicitColumnTracker.java
 951935 
  
trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestGetClosestAtOrBefore.java
 951935 
  
trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestGetDeleteTracker.java
 951935 
  
trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeyValueHeap.java 
951935 
  
trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestQueryMatcher.java 
951935 
  
trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanDeleteTracker.java
 951935 
  
trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestWildcardColumnTracker.java
 951935 
  trunk/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLog.java 
951935 

Diff: http://review.hbase.org/r/132/diff


Testing
---

Code compiles.
Tests are acting up on my machine right now (many of them fail with a weird 
message [1] and Todd says he's been seeing similar failures for some time 
already, so I guess I'll try to run them again next week when the New Moon 
arrives).


[1] A number of tests fail with:
org.apache.hadoop.hbase.client.NoServerForRegionException: Timed out trying to 
locate root region because: Failed setting up proxy to /192.168.0.7:63773 after 
attempts=1
at 
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:1031)
Where, of course, 192.168.0.7 is my IP address.
Some of the tests that are acting up:
org.apache.hadoop.hbase.TestZooKeeper, 
org.apache.hadoop.hbase.regionserver.wal.TestLogRolling, 
org.apache.hadoop.hbase.rest.TestScannersWithFilters, 
org.apache.hadoop.hbase.master.TestMasterWrongRS, 
org.apache.hadoop.hbase.thrift.TestThriftServer, 
org.apache.hadoop.hbase.master.TestMasterTransitions, 
org.apache.hadoop.hbase.rest.TestStatusResource, 
org.apache.hadoop.hbase.client.TestFromClientSide, 
org.apache.hadoop.hbase.TestMultiParallelPut, 
org.apache.hadoop.hbase.master.TestRegionManager, 
org.apache.hadoop.hbase.mapreduce.TestTimeRangeMapRed


Thanks,

Benoit





[jira] Commented: (HBASE-2681) Graceful fallback to NONE when the native compression algo is not found

2010-06-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876405#action_12876405
 ] 

stack commented on HBASE-2681:
--

@Dave Currently, HBase runs and the failure happens when we go to flush.  The 
failure is 'loud' if you are looking at the logs, but if not, it's not plain 
what is wrong.  Are you suggesting, Dave, that rather than just proceed, we 
instead fail even earlier?  Like when you try to create a table or alter a 
schema to add a compression that does not exist, it fails at that time?  
Doing the latter is probably the right thing to do, but it's kind of tough in 
that you need to ensure the compression is available on all members of the 
cluster inline with the setting of the schema.  We don't have a mechanism to 
do this.  What do we do also when a new server is added to the cluster but 
the admins forgot to add the compression libs: should this new node fail 
loudly or just log and keep going?





[jira] Commented: (HBASE-1364) [performance] Distributed splitting of regionserver commit logs

2010-06-07 Thread ryan rawson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876415#action_12876415
 ] 

ryan rawson commented on HBASE-1364:


for this JIRA we should do the inline splitting, which means waiting on the 
distributed split to finish before going to the next ProcessServerShutdown item 
on the queue.

It isn't as simple as saying 'just make it parallel', since there is an 
order to these TODO items and there are a number of different types of work 
items on the queue.  This should be addressed in the next JIRA.

 [performance] Distributed splitting of regionserver commit logs
 ---

 Key: HBASE-1364
 URL: https://issues.apache.org/jira/browse/HBASE-1364
 Project: HBase
  Issue Type: Improvement
Reporter: stack
Assignee: Alex Newman
Priority: Critical
 Fix For: 0.22.0

 Attachments: 1364-v2.patch, 1364.patch

  Time Spent: 8h
  Remaining Estimate: 0h

 HBASE-1008 has some improvements to our log splitting on regionserver crash; 
 but it needs to run even faster.
 (Below is from HBASE-1008)
 In bigtable paper, the split is distributed. If we're going to have 1000 
 logs, we need to distribute or at least multithread the splitting.
 1. As is, regions starting up expect to find one reconstruction log only. 
 Need to make it so pick up a bunch of edit logs and it should be fine that 
 logs are elsewhere in hdfs in an output directory written by all split 
 participants whether multithreaded or a mapreduce-like distributed process 
 (Lets write our distributed sort first as a MR so we learn whats involved; 
 distributed sort, as much as possible should use MR framework pieces). On 
 startup, regions go to this directory and pick up the files written by split 
 participants deleting and clearing the dir when all have been read in. Making 
 it so can take multiple logs for input, can also make the split process more 
 robust rather than current tenuous process which loses all edits if it 
 doesn't make it to the end without error.
 2. Each column family rereads the reconstruction log to find its edits. Need 
 to fix that. Split can sort the edits by column family so store only reads 
 its edits.




[jira] Commented: (HBASE-2681) Graceful fallback to NONE when the native compression algo is not found

2010-06-07 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876416#action_12876416
 ] 

Andrew Purtell commented on HBASE-2681:
---

I think it makes sense for the region server to refuse to open a region if the 
schema indicates to use an unavailable codec.


  







[jira] Commented: (HBASE-1364) [performance] Distributed splitting of regionserver commit logs

2010-06-07 Thread Alex Newman (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876421#action_12876421
 ] 

Alex Newman commented on HBASE-1364:


Todd: not hard at all. I exposed it as a different API, but if we wish we could 
make it conf-driven. I could see scenarios where the same cluster could use 
different types of splitting for different things *shrug*

Ryan: I totally agree.

Is there any reason why we are targeting 0.22.0 vs 0.21.0 for this patch?





[jira] Commented: (HBASE-2468) Improvements to prewarm META cache on clients

2010-06-07 Thread HBase Review Board (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876428#action_12876428
 ] 

HBase Review Board commented on HBASE-2468:
---

Message from: st...@duboce.net

---
This is an automatically generated e-mail. To reply, visit:
http://review.hbase.org/r/98/#review144
---

Ship it!


I think this is good to go.  See my comments below and see what you think.  My 
one concern is the number of calls to getRowOrBefore... hopefully this patch 
cuts down overall on our need to use this function.  I'd like to hear your 
opinion on that.


src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
http://review.hbase.org/r/98/#comment744

This is code duplicated from elsewhere.  Can I help make it so we don't 
have to do this duplication?  Or, for now, since this is your first patch, we 
can put it off IF you file a JIRA to fix the duplication (smile).



src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
http://review.hbase.org/r/98/#comment745

So we start scanning at 'row'?  Is this the 'row' the user asked for? No, 
it needs to be the row in the .META. table, right?  We need to find the row in 
.META. that contains the asked-for row first?  Never mind, I see below how the 
row here is made... this looks right.



src/main/java/org/apache/hadoop/hbase/client/HTable.java
http://review.hbase.org/r/98/#comment746

This is a nice little facility.



src/main/java/org/apache/hadoop/hbase/client/MetaScanner.java
http://review.hbase.org/r/98/#comment747

OK.  This looks right.



src/main/java/org/apache/hadoop/hbase/client/MetaScanner.java
http://review.hbase.org/r/98/#comment748

getRowOrBefore is an expensive call.  Are we sure we are not calling this 
too often?


- stack





 Improvements to prewarm META cache on clients
 -

 Key: HBASE-2468
 URL: https://issues.apache.org/jira/browse/HBASE-2468
 Project: HBase
  Issue Type: Improvement
  Components: client
Reporter: Todd Lipcon
Assignee: Mingjie Lai
 Fix For: 0.21.0

 Attachments: HBASE-2468-trunk.patch


 A couple different use cases cause storms of reads to META during startup. 
 For example, a large MR job will cause each map task to hit meta since it 
 starts with an empty cache.
 A couple possible improvements have been proposed:
  - MR jobs could ship a copy of META for the table in the DistributedCache
  - Clients could prewarm cache by doing a large scan of all the meta for the 
 table instead of random reads for each miss
  - Each miss could fetch ahead some number of rows in META
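The prewarm idea in the second bullet can be sketched as a toy in-process model (all names invented; a real client would scan .META. over RPC): one range scan fills the location cache up front instead of one round trip per miss.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Toy model of prewarming a region-location cache: one scan over "meta"
// replaces a point lookup per cache miss. Names are invented for illustration.
public class PrewarmSketch {
  static final TreeMap<String, String> meta = new TreeMap<>();  // startKey -> server
  static final TreeMap<String, String> cache = new TreeMap<>();
  static int metaReads = 0;  // round trips to "meta"

  static String locate(String rowKey) {
    Map.Entry<String, String> hit = cache.floorEntry(rowKey);
    if (hit != null) return hit.getValue();      // cache hit: no meta read
    metaReads++;                                 // miss: one meta round trip
    Map.Entry<String, String> m = meta.floorEntry(rowKey);
    cache.put(m.getKey(), m.getValue());
    return m.getValue();
  }

  static void prewarm() {   // one scan caches every region's location
    metaReads++;
    cache.putAll(meta);
  }

  public static void main(String[] args) {
    for (String k : List.of("a", "h", "p")) meta.put(k, "server-" + k);
    prewarm();
    List<String> servers = new ArrayList<>();
    for (String row : List.of("b", "i", "q")) servers.add(locate(row));
    System.out.println(servers + " after " + metaReads + " meta read(s)");
  }
}
```

Without the `prewarm()` call, the same three lookups would each pay a meta round trip; with it, all three are served from the cache after a single scan.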




[jira] Commented: (HBASE-2405) Close, split, open of regions in RegionServer are run by a single thread only.

2010-06-07 Thread Alex Newman (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876462#action_12876462
 ] 

Alex Newman commented on HBASE-2405:


Does distributed log splitting change what we should do here?

 Close, split, open of regions in RegionServer are run by a single thread only.
 --

 Key: HBASE-2405
 URL: https://issues.apache.org/jira/browse/HBASE-2405
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
Priority: Critical
 Fix For: 0.21.0

 Attachments: nsre.txt


 JGray and Karthik observed yesterday that a region open message arrived at 
 the regionserver but that the regionserver worker thread did not get around 
 to the actual opening until 45 seconds later (region offline for 45 
 seconds).  We only run a single Worker thread in a regionserver processing 
 open, close, and splits.  In this case, a long-running close (or two) held up 
 the worker thread.  We need to run more than a single worker.  A pool of 
 workers?  Should opens be prioritized?




[jira] Created: (HBASE-2685) Add activeMaster field to AClusterStatus record in Avro interface

2010-06-07 Thread Jeff Hammerbacher (JIRA)
Add activeMaster field to AClusterStatus record in Avro interface
-

 Key: HBASE-2685
 URL: https://issues.apache.org/jira/browse/HBASE-2685
 Project: HBase
  Issue Type: Improvement
  Components: avro
Reporter: Jeff Hammerbacher







[jira] Created: (HBASE-2686) LogRollListener is not a Listener, rename

2010-06-07 Thread stack (JIRA)
LogRollListener is not a Listener, rename
-

 Key: HBASE-2686
 URL: https://issues.apache.org/jira/browse/HBASE-2686
 Project: HBase
  Issue Type: Bug
Reporter: stack


J-D just pointed out that the LogRollListener interface cannot listen.  It 
has a single method named logRollRequested.  Let's fix it.  Rename the 
interface LogRoll (and the method name while we're at it... it's past tense 
but returns void?)

Let's fix this stuff.  It makes the code base hard to grok.




[jira] Commented: (HBASE-2238) Review all transitions -- compactions, splits, region opens, log splitting -- for crash-proofyness and atomicity

2010-06-07 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876485#action_12876485
 ] 

Jean-Daniel Cryans commented on HBASE-2238:
---

I ran into a weird situation today running TestReplication (from HBASE-2223's 
latest patch up on rb), the test kills a region server by expiring its session 
and then the following happened (almost all at the same time):

 # Master lists all hlogs to split (total of 12)
 # RS does a log roll
 # RS tries to register the new log in ZK for replication and fails because the 
session was expired, but the log is already rolled
 # RS takes 3 more edits into the new log
 # RS cleans 6 logs over 13
 # Master fails at splitting the 3rd log it listed, delays the log splitting 
process
 # Master tries again to split logs, lists 7 of them and is successful

In the end, the master wasn't missing any edits (because log splitting failed 
and got the new log the second time) but the slave cluster was missing 3. This 
makes me think that the region server should also do a better job of handling 
KeeperException.SessionExpiredException, because we currently don't handle it 
at all. 

 Review all transitions -- compactions, splits, region opens, log splitting -- 
 for crash-proofyness and atomicity
 

 Key: HBASE-2238
 URL: https://issues.apache.org/jira/browse/HBASE-2238
 Project: HBase
  Issue Type: Bug
Reporter: stack

 This issue is about reviewing state transitions in hbase to ensure we're 
 sufficiently hardened against crashes.  This issue I see as an umbrella issue 
 under which we'd look at compactions, splits, log splits, region opens -- 
 what else is there?  We'd look at each in turn to see how we survive a crash 
 at any time during the transition.  For example, we think compactions are 
 idempotent but we need to prove it so.  Splits are for sure not, not at the 
 moment (witness disabled parents but daughters missing, or only one of them 
 available).
 Part of this issue would be writing tests that aim to break transitions.
 In light of the above, here is a recent off-list note from Todd Lipcon (and 
 another):
 {code}
 I thought a bit more last night about the discussion we were having
 regarding various HBase components doing operations on the HDFS data,
 and ensuring that in various racy scenarios that we don't have two
 region servers or masters overlapping.
 I came to the conclusion that ZK data can't be used to actually have
 effective locks on HDFS directories, since we can never know that we
 still have a ZK lock when we do an operation. Thus the operations
 themselves have to be idempotent, or recoverable in the case of
 multiple nodes trying to do the same thing. Or, we have to use HDFS
 itself as a locking mechanism - this is what we discussed using write
 leases essentially as locks.
 Since I didn't really trust myself, I ran my thoughts by Another
 and he concurs (see
 below). Figured this is food for thought for designing HBase data
 management to be completely safe/correct.
 ...
 -- Forwarded message --
 From: Another anot...@xx.com
 Date: Wed, Feb 17, 2010 at 10:50 AM
 Subject: locks
 To: Todd Lipcon t...@xxx.com
 Short answer is no, you're right.
 Because HDFS and ZK are partitioned (in the sense that there's no
 communication between them) and there may be an unknown delay between
 acquiring the lock and performing the operation on HDFS you have no
 way of knowing that you still own the lock, like you say.
 If the lock cannot be revoked while you have it (no timeouts) then you
 can atomically check that you still have the lock and do the operation
 on HDFS, because checking is a no-op. Designing a system with no lock
 revocation in the face of failures is an exercise for the reader :)
 The right way is for HDFS and ZK to communicate to construct an atomic
 operation. ZK could give a token to the client which it also gives to
 HDFS, and HDFS uses that token to do admission control. There's
 probably some neat theorem about causality and the impossibility of
 doing distributed locking without a sufficiently strong atomic
 primitive here.
 Another
 {code}
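The fencing-token idea in the forwarded note can be sketched as a toy in-process model (all names invented; this stands in for ZK as the lock service and HDFS as the storage side): the lock grant carries a monotonically increasing epoch, and storage rejects operations from stale epochs.

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy model of fencing tokens: the "lock service" hands out increasing
// epochs, and the "storage" side does admission control on them, so a
// holder whose lock was revoked cannot complete a delayed operation.
public class FencingDemo {
  static final AtomicLong epochSource = new AtomicLong();
  static long storageEpoch = -1;  // highest epoch storage has admitted

  static long acquireLock() {     // lock grant = fresh epoch token
    return epochSource.incrementAndGet();
  }

  static boolean write(long epoch) {  // storage-side admission control
    if (epoch < storageEpoch) {
      return false;                   // stale holder: fenced off
    }
    storageEpoch = epoch;
    return true;
  }

  public static void main(String[] args) {
    long a = acquireLock();        // holder A gets epoch 1
    long b = acquireLock();        // A's lock is revoked; B gets epoch 2
    System.out.println(write(b));  // B's write is admitted
    System.out.println(write(a));  // A's delayed write is rejected
  }
}
```

This is exactly the "ZK gives a token to the client which it also gives to HDFS" scheme from the note; without storage-side admission control, A's delayed write would succeed despite the lost lock.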




[jira] Commented: (HBASE-2616) TestHRegion.testWritesWhileGetting flaky on trunk

2010-06-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876488#action_12876488
 ] 

stack commented on HBASE-2616:
--

This test runs for 2 minutes and 30 seconds up on hudson.  Does it have to?  
testWritesWhileScanning runs in 30 seconds.

 TestHRegion.testWritesWhileGetting flaky on trunk
 -

 Key: HBASE-2616
 URL: https://issues.apache.org/jira/browse/HBASE-2616
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Todd Lipcon
Assignee: ryan rawson
Priority: Critical
 Fix For: 0.20.5

 Attachments: HBASE-2616.patch


 Saw this failure on my internal hudson:
 junit.framework.AssertionFailedError: expected:\x00\x00\x00\x96 but 
 was:\x00\x00\x01\x00
   at 
 org.apache.hadoop.hbase.HBaseTestCase.assertEquals(HBaseTestCase.java:684)
   at 
 org.apache.hadoop.hbase.regionserver.TestHRegion.testWritesWhileGetting(TestHRegion.java:2334)




[jira] Commented: (HBASE-2616) TestHRegion.testWritesWhileGetting flaky on trunk

2010-06-07 Thread ryan rawson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876491#action_12876491
 ] 

ryan rawson commented on HBASE-2616:


the test should not take that long, but it is slow... seems like the
get() calls are strangely slow, I think because of the nature of
piling multiple versions on top of older versions.  Maybe some
improvements in the get/scan calls in the single-thick-row case would
make this faster.








[jira] Commented: (HBASE-2400) new connector for Avro RPC access to HBase cluster

2010-06-07 Thread HBase Review Board (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876493#action_12876493
 ] 

HBase Review Board commented on HBASE-2400:
---

Message from: Andrew Purtell apurt...@apache.org


bq.  On 2010-06-06 10:23:33, Andrew Purtell wrote:
bq.   trunk/src/main/java/org/apache/hadoop/hbase/avro/hbase.genavro, line 92
bq.   http://review.hbase.org/r/128/diff/1/?file=979#file979line92
bq.  
bq.   How do you deal with user attributes? A column can have an arbitrary 
set of them. Coprocessors will use this facility. Not necessary to support 
attribute access via fields of the descriptors if there are RPC methods 
available to read or write them.
bq.  
bq.  Jeff Hammerbacher wrote:
bq.  https://issues.apache.org/jira/browse/HBASE-2688
bq.  
bq.  I genuinely did not notice that these existed. I'll get a subsequent 
patch out once I clean this one up and it goes into trunk.

Sounds good. HTD and HCD can have attributes of arbitrary byte[].


- Andrew


---
This is an automatically generated e-mail. To reply, visit:
http://review.hbase.org/r/128/#review138
---





 new connector for Avro RPC access to HBase cluster
 --

 Key: HBASE-2400
 URL: https://issues.apache.org/jira/browse/HBASE-2400
 Project: HBase
  Issue Type: Task
  Components: avro
Reporter: Andrew Purtell
Priority: Minor
 Attachments: HBASE-2400-v0.patch


 Build a new connector contrib architecturally equivalent to the Thrift 
 connector, but using Avro serialization and associated transport and RPC 
 server work. Support AAA (audit, authentication, authorization). 




[jira] Created: (HBASE-2691) LeaseStillHeldException totally ignored by RS, wrongly named

2010-06-07 Thread Jean-Daniel Cryans (JIRA)
LeaseStillHeldException totally ignored by RS, wrongly named


 Key: HBASE-2691
 URL: https://issues.apache.org/jira/browse/HBASE-2691
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Daniel Cryans
Assignee: Jean-Daniel Cryans
 Fix For: 0.20.6, 0.21.0


Currently region servers don't handle 
org.apache.hadoop.hbase.Leases$LeaseStillHeldException in any useful way, so 
what happens right now is that the server keeps trying to report to the master 
and this happens:

{code}

2010-06-07 17:20:54,368 WARN  [RegionServer:0] regionserver.HRegionServer(553): 
Attempt=1
org.apache.hadoop.hbase.Leases$LeaseStillHeldException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at 
org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteExceptionHandler.java:94)
at 
org.apache.hadoop.hbase.RemoteExceptionHandler.checkThrowable(RemoteExceptionHandler.java:48)
at 
org.apache.hadoop.hbase.RemoteExceptionHandler.checkIOException(RemoteExceptionHandler.java:66)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:541)
at 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:173)
at java.lang.Thread.run(Thread.java:637)
{code}

Then it will retry until the watch is triggered telling it that the session 
has expired! Instead, we should be a lot more proactive and initiate the abort 
procedure.
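A minimal sketch of the proposed behavior, classifying report failures as retryable or fatal (matching on exception-name strings is purely illustrative; real code would match exception classes):

```java
// Hedged sketch: a lease-still-held or session-expired error means our
// identity is gone, so abort and restart cleanly instead of retrying.
public class ReportDecision {
  enum Action { RETRY, ABORT }

  static Action onReportFailure(String exceptionName) {
    if (exceptionName.contains("SessionExpired")
        || exceptionName.contains("LeaseStillHeld")) {
      return Action.ABORT;  // fatal: our lease/session is no longer ours
    }
    return Action.RETRY;    // transient: keep trying to report to master
  }

  public static void main(String[] args) {
    System.out.println(onReportFailure("Leases$LeaseStillHeldException"));
    System.out.println(onReportFailure("ConnectException"));
  }
}
```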

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HBASE-2673) Investigate consistency of intra-row scans

2010-06-07 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876524#action_12876524
 ] 

Todd Lipcon commented on HBASE-2673:


I thought about this a bit tonight. I think this is essentially impossible to 
implement unless we do the following:
- Add logical timestamps to HFiles (a few bytes per KV if we use vints 
relative to an hfile-wide meta entry)
- Add to the scanner API so that each scan result object also returns the 
current logical timestamp
- Add logical timestamps to HLog entries so that a server that replays the 
edits maintains the same logical timestamps of each row.

I think these are all needed in order to maintain consistency in the face of 
failure or through a flush operation.

Rather than do all of the above, I think we should simply document in 
Scanner.setBatch that using intra-row scanning loses the consistency guarantee. 
Also we'll want to augment the acid guarantees doc to state this.
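The first bullet above (vints relative to an hfile-wide meta entry) can be made concrete with a small sketch. Everything below is illustrative only: the class name, the varint helpers, and the numbers are invented for the example, not HBase code; it just shows why delta-encoding logical timestamps against a file-wide base keeps the per-KV cost to a few bytes.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

public class LogicalTsDelta {
    // Write v as an unsigned varint, 7 bits per byte (high bit = "more").
    public static void writeVLong(DataOutput out, long v) throws IOException {
        while ((v & ~0x7FL) != 0) {
            out.writeByte((int) ((v & 0x7F) | 0x80));
            v >>>= 7;
        }
        out.writeByte((int) v);
    }

    public static long readVLong(DataInput in) throws IOException {
        long v = 0;
        int shift = 0;
        byte b;
        do {
            b = in.readByte();
            v |= (long) (b & 0x7F) << shift;
            shift += 7;
        } while ((b & 0x80) != 0);
        return v;
    }

    public static void main(String[] args) throws IOException {
        long base = 1_000_000L; // would live in an hfile-wide meta entry
        long[] kvTimestamps = {1_000_001L, 1_000_050L, 1_000_900L};

        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        for (long ts : kvTimestamps) {
            writeVLong(out, ts - base); // store only the delta per KV
        }

        DataInputStream in = new DataInputStream(
            new ByteArrayInputStream(buf.toByteArray()));
        for (long ts : kvTimestamps) {
            assert base + readVLong(in) == ts; // round-trips exactly
        }
        // Three timestamps fit in 4 bytes instead of 3 * 8 fixed bytes.
        System.out.println(buf.size()); // prints 4
    }
}
```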

 Investigate consistency of intra-row scans
 --

 Key: HBASE-2673
 URL: https://issues.apache.org/jira/browse/HBASE-2673
 Project: HBase
  Issue Type: Task
  Components: documentation, regionserver
Affects Versions: 0.21.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon

 I have an intuition that intra-row scanning probably does not provide a 
 consistent view of the row. We should investigate how true this is, and 
 document what the interaction of the feature is with the guarantee, etc.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HBASE-2670) Reader atomicity broken in trunk

2010-06-07 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876526#action_12876526
 ] 

Todd Lipcon commented on HBASE-2670:


The issue is that the memstore timestamps are lost once we flush the memstore 
to an HFile, and we immediately change over open scanners to scan from the 
HFile in updateReaders().

I think the fix is one of the following:
- in updateReaders, scan ahead the memstore reader until the next row, and 
cache those KVs internally. Then when we hit the end of the cache, do the 
actual reseek in the HFile at the beginning of the next row.
- in updateReaders, simply mark a flag that we need to update as soon as we hit 
the next row. Then do the reseek lazily in next()
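The second option (mark a flag, reseek lazily in next()) can be sketched with a toy scanner over sorted row keys. All names here are invented for the sketch; the real StoreScanner works over KeyValue heaps, not strings, and this only illustrates deferring the switch-over to a row boundary.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class LazyReseekScanner {
    private Iterator<String> current;
    private List<String> pendingRows;       // the "new readers" to switch to
    private boolean readersChanged = false; // flag set by updateReaders()
    private String lastRow = null;

    public LazyReseekScanner(List<String> rows) {
        this.current = rows.iterator();
    }

    // Called by the flusher thread: do NOT reseek here, just mark the flag.
    public synchronized void updateReaders(List<String> newRows) {
        this.pendingRows = newRows;
        this.readersChanged = true;
    }

    // The reseek is deferred until the scanner itself asks for the next row,
    // so the switch-over always happens at a row boundary.
    public synchronized String next() {
        if (readersChanged) {
            List<String> remaining = new ArrayList<>();
            for (String r : pendingRows) {
                if (lastRow == null || r.compareTo(lastRow) > 0) {
                    remaining.add(r); // skip rows already returned
                }
            }
            current = remaining.iterator();
            readersChanged = false;
        }
        if (!current.hasNext()) return null;
        lastRow = current.next();
        return lastRow;
    }

    public static void main(String[] args) {
        LazyReseekScanner s = new LazyReseekScanner(Arrays.asList("a", "b", "c"));
        System.out.println(s.next());                       // "a", pre-flush
        s.updateReaders(Arrays.asList("a", "b", "c", "d")); // flush happened
        System.out.println(s.next());                       // "b", post-flush
    }
}
```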

 Reader atomicity broken in trunk
 

 Key: HBASE-2670
 URL: https://issues.apache.org/jira/browse/HBASE-2670
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.21.0
Reporter: Todd Lipcon
Assignee: ryan rawson
Priority: Blocker

 There appears to be a bug in HBASE-2248 as committed to trunk. See following 
 failing test:
 http://hudson.zones.apache.org/hudson/job/HBase-TRUNK/1296/testReport/junit/org.apache.hadoop.hbase/TestAcidGuarantees/testAtomicity/
 Think this is the same bug we saw early on in 2248 in the 0.20 branch, looks 
 like the fix didn't make it over.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (HBASE-2670) Reader atomicity broken in trunk

2010-06-07 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon reassigned HBASE-2670:
--

Assignee: Todd Lipcon  (was: ryan rawson)

 Reader atomicity broken in trunk
 

 Key: HBASE-2670
 URL: https://issues.apache.org/jira/browse/HBASE-2670
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.21.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Blocker

 There appears to be a bug in HBASE-2248 as committed to trunk. See following 
 failing test:
 http://hudson.zones.apache.org/hudson/job/HBase-TRUNK/1296/testReport/junit/org.apache.hadoop.hbase/TestAcidGuarantees/testAtomicity/
 Think this is the same bug we saw early on in 2248 in the 0.20 branch, looks 
 like the fix didn't make it over.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HBASE-2670) Reader atomicity broken in trunk

2010-06-07 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876531#action_12876531
 ] 

Todd Lipcon commented on HBASE-2670:


Actually, I misunderstood this a little bit. The above describes the issue when 
using intra-row scanning (see HBASE-2673). Without intra-row scanning, since 
updateReaders is synchronized, it should only be called when the StoreScanner 
is between rows.

So I think the issue may be how updateReaders itself works. It uses peek() on 
the heap to find the next row it's going to, and then seeks to that one. 
updateReaders, though, is called by a different thread with a different 
readpoint set. So that seek pulls in values for the next row that are different 
from what will be read.

I'll try to make a patch for this tomorrow.
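The readpoint mismatch described above can be shown with a toy model, with invented names throughout: a seek evaluated under the flusher thread's readpoint can surface an edit that the scanner's own readpoint should not see.

```java
import java.util.Arrays;
import java.util.List;

public class ReadPointDemo {
    // (row, memstoreTs) pairs; row 2 has a newer edit at memstoreTs 20.
    static final List<long[]> KVS = Arrays.asList(
        new long[]{1, 10},
        new long[]{2, 10},
        new long[]{2, 20});

    // Newest memstoreTs for a row that is visible under the given readpoint.
    public static long seekRow(long row, long readPoint) {
        long best = -1;
        for (long[] kv : KVS) {
            if (kv[0] == row && kv[1] <= readPoint) {
                best = Math.max(best, kv[1]);
            }
        }
        return best;
    }

    public static void main(String[] args) {
        long scannerReadPoint = 15; // scanner opened before the second edit
        long flusherReadPoint = 25; // flusher thread sees everything
        // The scanner should see row 2 at ts 10, but a seek evaluated with
        // the flusher's readpoint pulls in the ts-20 edit instead.
        assert seekRow(2, scannerReadPoint) == 10;
        assert seekRow(2, flusherReadPoint) == 20;
    }
}
```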

 Reader atomicity broken in trunk
 

 Key: HBASE-2670
 URL: https://issues.apache.org/jira/browse/HBASE-2670
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.21.0
Reporter: Todd Lipcon
Assignee: ryan rawson
Priority: Blocker

 There appears to be a bug in HBASE-2248 as committed to trunk. See following 
 failing test:
 http://hudson.zones.apache.org/hudson/job/HBase-TRUNK/1296/testReport/junit/org.apache.hadoop.hbase/TestAcidGuarantees/testAtomicity/
 Think this is the same bug we saw early on in 2248 in the 0.20 branch, looks 
 like the fix didn't make it over.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HBASE-2670) Reader atomicity broken in trunk

2010-06-07 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876533#action_12876533
 ] 

Todd Lipcon commented on HBASE-2670:


How about you post a patch for review instead of just committing? This stuff is 
bug-prone; we should do code reviews.

 Reader atomicity broken in trunk
 

 Key: HBASE-2670
 URL: https://issues.apache.org/jira/browse/HBASE-2670
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.21.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Blocker

 There appears to be a bug in HBASE-2248 as committed to trunk. See following 
 failing test:
 http://hudson.zones.apache.org/hudson/job/HBase-TRUNK/1296/testReport/junit/org.apache.hadoop.hbase/TestAcidGuarantees/testAtomicity/
 Think this is the same bug we saw early on in 2248 in the 0.20 branch, looks 
 like the fix didn't make it over.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HBASE-2691) LeaseStillHeldException totally ignored by RS, wrongly named

2010-06-07 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876542#action_12876542
 ] 

Jean-Daniel Cryans commented on HBASE-2691:
---

The RS's session expired; it reports back to the master right after that 
(it's marked dead in the master) and trips into:

{code}

  private void checkIsDead(final String serverName, final String what)
  throws LeaseStillHeldException {
if (!isDead(serverName)) return;
LOG.debug("Server " + what + " rejected; currently processing " +
  serverName + " as dead server");
throw new Leases.LeaseStillHeldException(serverName);
  }
{code}

Which I see in the log. Then, on the HRS side, this falls into:

{code}

  } catch (Exception e) { // FindBugs REC_CATCH_EXCEPTION
if (e instanceof IOException) {
  e = RemoteExceptionHandler.checkIOException((IOException) e);
}
tries++;
if (tries > 0 && (tries % this.numRetries) == 0) {
  // Check filesystem every so often.
  checkFileSystem();
}
if (this.stopRequested.get()) {
  LOG.info("Stop requested, clearing toDo despite exception");
  toDo.clear();
  continue;
}
LOG.warn("Attempt=" + tries, e);
// No point retrying immediately; this is probably connection to
// master issue.  Doing below will cause us to sleep.
lastMsg = System.currentTimeMillis();
{code}

Which throws the stack trace I pasted in this jira's description. IMO, and 
taking into account the last comment in that code, we shouldn't retry. Instead, 
we should catch LeaseStillHeldException separately from this big 
catch(Exception) and treat it as an emergency shut down.
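The proposal above can be sketched as follows. All names are stand-ins invented for the sketch (abort() and reportForDuty() here are not the actual HRegionServer methods): catch LeaseStillHeldException before the generic catch (Exception) and shut down instead of retrying.

```java
public class LeaseHandlingSketch {
    static class LeaseStillHeldException extends Exception {}

    public static boolean aborted = false;

    // Stand-in for an emergency shutdown of the region server.
    static void abort(String why) { aborted = true; }

    // Stand-in for the RS reporting to the master; throws when the master is
    // already processing this server as a dead server.
    static void reportForDuty(boolean masterThinksUsDead)
            throws LeaseStillHeldException {
        if (masterThinksUsDead) throw new LeaseStillHeldException();
    }

    public static void runLoop(boolean masterThinksUsDead) {
        try {
            reportForDuty(masterThinksUsDead);
        } catch (LeaseStillHeldException e) {
            // Retrying is pointless: the master already considers us dead,
            // so shut down now instead of looping until the session-expired
            // watch fires.
            abort("master is processing us as a dead server");
        } catch (Exception e) {
            // The existing big catch (Exception) retry path stays as-is.
        }
    }

    public static void main(String[] args) {
        runLoop(true);
        assert aborted; // emergency shutdown taken, no retry loop
    }
}
```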

 LeaseStillHeldException totally ignored by RS, wrongly named
 

 Key: HBASE-2691
 URL: https://issues.apache.org/jira/browse/HBASE-2691
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Daniel Cryans
Assignee: Jean-Daniel Cryans
 Fix For: 0.20.6, 0.21.0


 Currently region servers don't handle 
 org.apache.hadoop.hbase.Leases$LeaseStillHeldException in any way that's 
 useful so what happens right now is that it tries to report to the master and 
 this happens:
 {code}
 2010-06-07 17:20:54,368 WARN  [RegionServer:0] 
 regionserver.HRegionServer(553): Attempt=1
 org.apache.hadoop.hbase.Leases$LeaseStillHeldException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteExceptionHandler.java:94)
 at 
 org.apache.hadoop.hbase.RemoteExceptionHandler.checkThrowable(RemoteExceptionHandler.java:48)
 at 
 org.apache.hadoop.hbase.RemoteExceptionHandler.checkIOException(RemoteExceptionHandler.java:66)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:541)
 at 
 org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:173)
 at java.lang.Thread.run(Thread.java:637)
 {code}
 Then it will retry until the watch is triggered telling it that the session's 
 expired! Instead, we should be a lot more proactive and initiate the abort procedure.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HBASE-2691) LeaseStillHeldException totally ignored by RS, wrongly named

2010-06-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876553#action_12876553
 ] 

stack commented on HBASE-2691:
--

How are you going to tell the difference between this case and the 
LeaseStillHeldException thrown when we're processing the shutdown of a RS that 
was on the same host and port as this RS? (The scenario: the RS fails and is 
restarted so quickly that it checks in at the master before the master even 
knows it's dead.)

 LeaseStillHeldException totally ignored by RS, wrongly named
 

 Key: HBASE-2691
 URL: https://issues.apache.org/jira/browse/HBASE-2691
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Daniel Cryans
Assignee: Jean-Daniel Cryans
 Fix For: 0.20.6, 0.21.0


 Currently region servers don't handle 
 org.apache.hadoop.hbase.Leases$LeaseStillHeldException in any way that's 
 useful so what happens right now is that it tries to report to the master and 
 this happens:
 {code}
 2010-06-07 17:20:54,368 WARN  [RegionServer:0] 
 regionserver.HRegionServer(553): Attempt=1
 org.apache.hadoop.hbase.Leases$LeaseStillHeldException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteExceptionHandler.java:94)
 at 
 org.apache.hadoop.hbase.RemoteExceptionHandler.checkThrowable(RemoteExceptionHandler.java:48)
 at 
 org.apache.hadoop.hbase.RemoteExceptionHandler.checkIOException(RemoteExceptionHandler.java:66)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:541)
 at 
 org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:173)
 at java.lang.Thread.run(Thread.java:637)
 {code}
 Then it will retry until the watch is triggered telling it that the session's 
 expired! Instead, we should be a lot more proactive and initiate the abort procedure.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.