[jira] [Updated] (HBASE-6711) Avoid local results copy in StoreScanner

2012-09-03 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6711:
-

Attachment: 6711-0.96-v1.txt

> Avoid local results copy in StoreScanner
> 
>
> Key: HBASE-6711
> URL: https://issues.apache.org/jira/browse/HBASE-6711
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Fix For: 0.96.0, 0.94.2
>
> Attachments: 6711-0.96-v1.txt, 6711-0.96-v1.txt, 6711-0.96-v1.txt
>
>
> In StoreScanner the number of results is limited to avoid OOMs.
> However, this is done by first adding each KV to a local ArrayList and then
> copying the entries in this list to the final result list.
> It turns out that this temporary list is only used to keep track of the size
> of the result set in this loop.  A simple int can be used instead.
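
For illustration, here is a minimal sketch of the change being described (simplified, not the actual StoreScanner code; the real loop works on KeyValues coming off the KeyValueHeap):

{code}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Simplified stand-in for the StoreScanner limit check, not the real code.
class LimitLoopSketch {
  // Before: cells are buffered in a temporary list and then copied,
  // purely so that results.size() can bound the loop.
  static <T> void nextWithTempList(Iterator<T> heap, List<T> outResult, int limit) {
    List<T> results = new ArrayList<T>();
    while (heap.hasNext() && results.size() < limit) {
      results.add(heap.next());
    }
    outResult.addAll(results); // the extra copy this issue removes
  }

  // After: a plain int bounds the loop and cells go straight to outResult.
  static <T> void nextWithCounter(Iterator<T> heap, List<T> outResult, int limit) {
    int count = 0;
    while (heap.hasNext() && count < limit) {
      outResult.add(heap.next()); // no intermediate list, no copy
      count++;
    }
  }
}
{code}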



[jira] [Commented] (HBASE-6698) Refactor checkAndPut and checkAndDelete to use doMiniBatchMutation

2012-09-03 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447491#comment-13447491
 ] 

ramkrishna.s.vasudevan commented on HBASE-6698:
---

One thing I see here is that we need to make use of the prePut hook, which after
the refactoring will no longer be invoked in the checkAndMutate case.
So would it be OK to replace all internalPut() calls with batchMutate()?
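
For illustration, a rough sketch of the hook concern (hypothetical names, not the real HRegion/coprocessor API): whichever path checkAndPut ends up calling, the per-Put hook still has to fire somewhere.

{code}
// Hypothetical sketch only; these are not the real HBase classes.
interface RegionObserverSketch {
  void prePut(Object put);            // fired once per Put today via internalPut()
  void preBatchMutate(Object[] ops);  // fired once per mini-batch
}

class RegionSketch {
  private final RegionObserverSketch cp;
  RegionSketch(RegionObserverSketch cp) { this.cp = cp; }

  // Old path: checkAndPut -> internalPut, which invokes prePut directly.
  void internalPut(Object put) {
    cp.prePut(put);
    // ... apply the single mutation ...
  }

  // Proposed path: checkAndPut -> batchMutate.  prePut has to be invoked
  // somewhere on this path too, or the hook is silently skipped.
  void batchMutate(Object[] ops) {
    cp.preBatchMutate(ops);
    for (Object op : ops) {
      cp.prePut(op); // keep the per-mutation hook semantics
      // ... apply the mutation ...
    }
  }
}
{code}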

> Refactor checkAndPut and checkAndDelete to use doMiniBatchMutation
> --
>
> Key: HBASE-6698
> URL: https://issues.apache.org/jira/browse/HBASE-6698
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
> Fix For: 0.96.0
>
> Attachments: HBASE-6698_1.patch, HBASE-6698.patch
>
>
> Currently the checkAndPut and checkAndDelete APIs internally call
> internalPut and internalDelete.  Maybe we can just call doMiniBatchMutation
> instead.  This will help in the future: if we have some hooks and the CP
> handles certain cases in doMiniBatchMutation, the same handling will apply
> when doing a put through checkAndPut or a delete through checkAndDelete.



[jira] [Commented] (HBASE-6698) Refactor checkAndPut and checkAndDelete to use doMiniBatchMutation

2012-09-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447484#comment-13447484
 ] 

Ted Yu commented on HBASE-6698:
---

internalPut() is called by put(). So its use is widespread.

Since this JIRA aims to facilitate coprocessor hooks, I think we should examine 
the usage carefully.

BTW, Hadoop QA is temporarily out of order because of a non-zero return value
from the mvn command.

> Refactor checkAndPut and checkAndDelete to use doMiniBatchMutation
> --
>
> Key: HBASE-6698
> URL: https://issues.apache.org/jira/browse/HBASE-6698
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
> Fix For: 0.96.0
>
> Attachments: HBASE-6698_1.patch, HBASE-6698.patch
>
>
> Currently the checkAndPut and checkAndDelete APIs internally call
> internalPut and internalDelete.  Maybe we can just call doMiniBatchMutation
> instead.  This will help in the future: if we have some hooks and the CP
> handles certain cases in doMiniBatchMutation, the same handling will apply
> when doing a put through checkAndPut or a delete through checkAndDelete.



[jira] [Commented] (HBASE-6698) Refactor checkAndPut and checkAndDelete to use doMiniBatchMutation

2012-09-03 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447481#comment-13447481
 ] 

ramkrishna.s.vasudevan commented on HBASE-6698:
---

@Ted
We did not remove internalPut and internalDelete because they were mostly
being used in HBaseFsck and in the Merge tool.
The other usages were only in test cases.
What do you think, Ted?


> Refactor checkAndPut and checkAndDelete to use doMiniBatchMutation
> --
>
> Key: HBASE-6698
> URL: https://issues.apache.org/jira/browse/HBASE-6698
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
> Fix For: 0.96.0
>
> Attachments: HBASE-6698_1.patch, HBASE-6698.patch
>
>
> Currently the checkAndPut and checkAndDelete APIs internally call
> internalPut and internalDelete.  Maybe we can just call doMiniBatchMutation
> instead.  This will help in the future: if we have some hooks and the CP
> handles certain cases in doMiniBatchMutation, the same handling will apply
> when doing a put through checkAndPut or a delete through checkAndDelete.



[jira] [Updated] (HBASE-6698) Refactor checkAndPut and checkAndDelete to use doMiniBatchMutation

2012-09-03 Thread Priyadarshini (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Priyadarshini updated HBASE-6698:
-

Status: Patch Available  (was: Open)

> Refactor checkAndPut and checkAndDelete to use doMiniBatchMutation
> --
>
> Key: HBASE-6698
> URL: https://issues.apache.org/jira/browse/HBASE-6698
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
> Fix For: 0.96.0
>
> Attachments: HBASE-6698_1.patch, HBASE-6698.patch
>
>
> Currently the checkAndPut and checkAndDelete APIs internally call
> internalPut and internalDelete.  Maybe we can just call doMiniBatchMutation
> instead.  This will help in the future: if we have some hooks and the CP
> handles certain cases in doMiniBatchMutation, the same handling will apply
> when doing a put through checkAndPut or a delete through checkAndDelete.



[jira] [Updated] (HBASE-6698) Refactor checkAndPut and checkAndDelete to use doMiniBatchMutation

2012-09-03 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-6698:
--

Status: Open  (was: Patch Available)

> Refactor checkAndPut and checkAndDelete to use doMiniBatchMutation
> --
>
> Key: HBASE-6698
> URL: https://issues.apache.org/jira/browse/HBASE-6698
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
> Fix For: 0.96.0
>
> Attachments: HBASE-6698_1.patch, HBASE-6698.patch
>
>
> Currently the checkAndPut and checkAndDelete APIs internally call
> internalPut and internalDelete.  Maybe we can just call doMiniBatchMutation
> instead.  This will help in the future: if we have some hooks and the CP
> handles certain cases in doMiniBatchMutation, the same handling will apply
> when doing a put through checkAndPut or a delete through checkAndDelete.



[jira] [Updated] (HBASE-6698) Refactor checkAndPut and checkAndDelete to use doMiniBatchMutation

2012-09-03 Thread Priyadarshini (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Priyadarshini updated HBASE-6698:
-

Attachment: HBASE-6698_1.patch

> Refactor checkAndPut and checkAndDelete to use doMiniBatchMutation
> --
>
> Key: HBASE-6698
> URL: https://issues.apache.org/jira/browse/HBASE-6698
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
> Fix For: 0.96.0
>
> Attachments: HBASE-6698_1.patch, HBASE-6698.patch
>
>
> Currently the checkAndPut and checkAndDelete APIs internally call
> internalPut and internalDelete.  Maybe we can just call doMiniBatchMutation
> instead.  This will help in the future: if we have some hooks and the CP
> handles certain cases in doMiniBatchMutation, the same handling will apply
> when doing a put through checkAndPut or a delete through checkAndDelete.



[jira] [Commented] (HBASE-6713) Stopping meta/root RS may take 50mins when it is in region-splitting

2012-09-03 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447463#comment-13447463
 ] 

chunhui shen commented on HBASE-6713:
-

bq. Why this change:
{code}
-if (!r.getRegionInfo().isMetaRegion() && r.isAvailable()) {
+if (!r.getRegionInfo().isMetaTable() && r.isAvailable()) {
{code}

isMetaTable() means -ROOT- or .META.
isMetaRegion() means only .META.

HRegionServer#closeUserRegions is meant to close all the user regions, but without
the patch it will also close -ROOT-, which I think is a mistake.
In the patch I try to close the meta-table regions only after the compact/split
thread has exited, so closeUserRegions shouldn't close the -ROOT- region.
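
A toy sketch of that distinction (simplified, not the real HRegionInfo/HRegionServer code):

{code}
import java.util.ArrayList;
import java.util.List;

// Simplified illustration of isMetaRegion() vs isMetaTable().
class RegionInfoSketch {
  final String table;
  RegionInfoSketch(String table) { this.table = table; }
  boolean isMetaRegion() { return table.equals(".META."); }                   // .META. only
  boolean isMetaTable()  { return isMetaRegion() || table.equals("-ROOT-"); } // -ROOT- or .META.
}

class CloseUserRegionsSketch {
  // With !isMetaRegion(), -ROOT- slips through and gets closed too early;
  // with !isMetaTable(), both catalog regions are left for a later step.
  static List<RegionInfoSketch> toClose(List<RegionInfoSketch> online) {
    List<RegionInfoSketch> out = new ArrayList<RegionInfoSketch>();
    for (RegionInfoSketch r : online) {
      if (!r.isMetaTable()) {   // the patched condition
        out.add(r);
      }
    }
    return out;
  }
}
{code}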

> Stopping meta/root RS may take 50mins when it is in region-splitting
> 
>
> Key: HBASE-6713
> URL: https://issues.apache.org/jira/browse/HBASE-6713
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.94.1
>Reporter: chunhui shen
>Assignee: chunhui shen
> Fix For: 0.94.3
>
> Attachments: HBASE-6713.patch
>
>
> When we stop the RS carrying ROOT/META while it is in the middle of splitting
> some region, the whole stopping process may take 50 minutes.
> The reason is:
> 1. The ROOT/META region is closed when stopping the regionserver.
> 2. The split transaction fails to update META and keeps retrying.
> 3. The retry count is 100, and the total time is about 50 minutes by default;
> this configuration is set by
> HConnectionManager#setServerSideHConnectionRetries.
> I think 50 minutes is too long to be acceptable; my suggested solution is to
> close the meta-table regions only after the compact/split thread is closed.



[jira] [Commented] (HBASE-6671) Kerberos authenticated super user should be able to retrieve proxied delegation tokens

2012-09-03 Thread Himanshu Vashishtha (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447450#comment-13447450
 ] 

Himanshu Vashishtha commented on HBASE-6671:


EDIT:
If so, then can a user Joe who is not Kerberos authenticated access HBase
services by piggybacking on HBase's credentials?

> Kerberos authenticated super user should be able to retrieve proxied 
> delegation tokens
> --
>
> Key: HBASE-6671
> URL: https://issues.apache.org/jira/browse/HBASE-6671
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.1
>Reporter: Francis Liu
>Assignee: Francis Liu
> Fix For: 0.96.0, 0.94.2
>
> Attachments: 6671-trunk-v2.txt, proxy_fix_94.patch, 
> proxy_fix_94.patch, proxy_fix_trunk.patch
>
>
> There are services such as Oozie which perform actions on behalf of the user
> using proxy authentication. Retrieving delegation tokens should support this
> behavior.



[jira] [Commented] (HBASE-6671) Kerberos authenticated super user should be able to retrieve proxied delegation tokens

2012-09-03 Thread Himanshu Vashishtha (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447449#comment-13447449
 ] 

Himanshu Vashishtha commented on HBASE-6671:


Sorry for chiming in late, but I want to understand what is going on here. I
was reading the HBase security code and the attached patch. A basic question:
do we support proxy users? If so, then a user Joe who is not Kerberos
authenticated can access HBase services by piggybacking on HBase's credentials.
How can one enable/use it? Please share.




> Kerberos authenticated super user should be able to retrieve proxied 
> delegation tokens
> --
>
> Key: HBASE-6671
> URL: https://issues.apache.org/jira/browse/HBASE-6671
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.1
>Reporter: Francis Liu
>Assignee: Francis Liu
> Fix For: 0.96.0, 0.94.2
>
> Attachments: 6671-trunk-v2.txt, proxy_fix_94.patch, 
> proxy_fix_94.patch, proxy_fix_trunk.patch
>
>
> There are services such as Oozie which perform actions on behalf of the user
> using proxy authentication. Retrieving delegation tokens should support this
> behavior.



[jira] [Commented] (HBASE-4676) Prefix Compression - Trie data block encoding

2012-09-03 Thread Matt Corgan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447427#comment-13447427
 ] 

Matt Corgan commented on HBASE-4676:


ReviewBoard links:

hbase-common: https://reviews.apache.org/r/6897/
hbase-prefix-tree: https://reviews.apache.org/r/6898/

The patches exclude tests for now.

> Prefix Compression - Trie data block encoding
> -
>
> Key: HBASE-4676
> URL: https://issues.apache.org/jira/browse/HBASE-4676
> Project: HBase
>  Issue Type: New Feature
>  Components: io, performance, regionserver
>Affects Versions: 0.90.6
>Reporter: Matt Corgan
>Assignee: Matt Corgan
> Attachments: HBASE-4676-0.94-v1.patch, hbase-prefix-trie-0.1.jar, 
> PrefixTrie_Format_v1.pdf, PrefixTrie_Performance_v1.pdf, SeeksPerSec by 
> blockSize.png
>
>
> The HBase data block format has room for 2 significant improvements for 
> applications that have high block cache hit ratios.  
> First, there is no prefix compression, and the current KeyValue format is 
> somewhat metadata heavy, so there can be tremendous memory bloat for many 
> common data layouts, specifically those with long keys and short values.
> Second, there is no random access to KeyValues inside data blocks.  This 
> means that every time you double the datablock size, average seek time (or 
> average cpu consumption) goes up by a factor of 2.  The standard 64KB block 
> size is ~10x slower for random seeks than a 4KB block size, but block sizes 
> as small as 4KB cause problems elsewhere.  Using block sizes of 256KB or 1MB 
> or more may be more efficient from a disk access and block-cache perspective 
> in many big-data applications, but doing so is infeasible from a random seek 
> perspective.
> The PrefixTrie block encoding format attempts to solve both of these 
> problems.  Some features:
> * trie format for row key encoding completely eliminates duplicate row keys 
> and encodes similar row keys into a standard trie structure which also saves 
> a lot of space
> * the column family is currently stored once at the beginning of each block.  
> this could easily be modified to allow multiple family names per block
> * all qualifiers in the block are stored in their own trie format which 
> caters nicely to wide rows.  duplicate qualifiers between rows are eliminated. 
>  the size of this trie determines the width of the block's qualifier 
> fixed-width-int
> * the minimum timestamp is stored at the beginning of the block, and deltas 
> are calculated from that.  the maximum delta determines the width of the 
> block's timestamp fixed-width-int
> The block is structured with metadata at the beginning, then a section for 
> the row trie, then the column trie, then the timestamp deltas, and then 
> all the values.  Most work is done in the row trie, where every leaf node 
> (corresponding to a row) contains a list of offsets/references corresponding 
> to the cells in that row.  Each cell is fixed-width to enable binary 
> searching and is represented by [1 byte operationType, X bytes qualifier 
> offset, X bytes timestamp delta offset].
> If all operation types are the same for a block, there will be zero per-cell 
> overhead.  Same for timestamps.  Same for qualifiers when I get a chance.  
> So, the compression aspect is very strong, but makes a few small sacrifices 
> on VarInt size to enable faster binary searches in trie fan-out nodes.
> A more compressed but slower version might build on this by also applying 
> further (suffix, etc) compression on the trie nodes at the cost of slower 
> write speed.  Even further compression could be obtained by using all VInts 
> instead of FInts with a sacrifice on random seek speed (though not huge).
> One current drawback is the current write speed.  While programmed with good 
> constructs like TreeMaps, ByteBuffers, binary searches, etc, it's not 
> programmed with the same level of optimization as the read path.  Work will 
> need to be done to optimize the data structures used for encoding, which could 
> probably yield a 10x improvement.  It will still be slower than delta encoding, 
> but with a much higher decode speed.  I have not yet created a thorough 
> benchmark for write speed nor sequential read speed.
> Though the trie is reaching a point where it is internally very efficient 
> (probably within half or a quarter of its max read speed), the way that HBase 
> currently uses it is far from optimal.  The KeyValueScanner and related 
> classes that iterate through the trie will eventually need to be smarter and 
> have methods to do things like skipping to the next row of results without 
> scanning every cell in between.  When that is accomplished it will also allow 
> much faster compactions because the full row key will not have to be compared 
>
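
A toy sketch of the row-trie prefix sharing described above (nothing like the real block format; it only shows that bytes shared with previously inserted row keys are stored once):

{code}
import java.util.TreeMap;

// Toy trie over String row keys; the real encoder works on byte[].
class RowTrieSketch {
  static final class Node {
    final TreeMap<Character, Node> children = new TreeMap<Character, Node>();
    boolean isRow; // true if a row key ends at this node
  }

  final Node root = new Node();
  int storedChars = 0; // characters actually stored in the trie

  void add(String rowKey) {
    Node n = root;
    for (char c : rowKey.toCharArray()) {
      Node next = n.children.get(c);
      if (next == null) {
        next = new Node();
        n.children.put(c, next);
        storedChars++;          // only the unshared suffix costs new space
      }
      n = next;
    }
    n.isRow = true;
  }

  public static void main(String[] args) {
    RowTrieSketch t = new RowTrieSketch();
    String[] rows = {"user_00017_a", "user_00017_b", "user_00018_a"};
    int raw = 0;
    for (String r : rows) { t.add(r); raw += r.length(); }
    System.out.println("raw chars: " + raw + ", trie chars: " + t.storedChars);
  }
}
{code}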

[jira] [Commented] (HBASE-6712) Implement checkAndIncrement

2012-09-03 Thread Stefan Baritchii (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447417#comment-13447417
 ] 

Stefan Baritchii commented on HBASE-6712:
-

@Lars: renaming is fine with me. I will also take a look at the coprocessor
approach.

> Implement checkAndIncrement
> ---
>
> Key: HBASE-6712
> URL: https://issues.apache.org/jira/browse/HBASE-6712
> Project: HBase
>  Issue Type: New Feature
>  Components: client
>Affects Versions: 0.92.1
>Reporter: Stefan Baritchii
>
> increment should throw an exception if a row does not exist; instead it
> creates the row. checkAndIncrement may also be a solution to this, but it needs
> development.



[jira] [Commented] (HBASE-5547) Don't delete HFiles when in "backup mode"

2012-09-03 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447408#comment-13447408
 ] 

Matteo Bertozzi commented on HBASE-5547:


Can we add preChore() and postChore() to FileCleanerDelegate, called at the
beginning/end of CleanerChore.chore()?
It would be useful to move TimeToLiveHFileCleaner.instantiateFS() and
SnapshotCleanerUtil.refreshCache() there.
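
Roughly what that could look like (FileCleanerDelegate and CleanerChore are the real classes under discussion, but preChore()/postChore() do not exist yet; this is only a sketch of the proposal):

{code}
import java.util.List;

// Hypothetical shape of the proposed hooks, not the existing API.
interface FileCleanerDelegateSketch {
  void preChore();                          // e.g. instantiate the FS, refresh a snapshot cache
  boolean isFileDeletable(String path);     // existing-style per-file decision
  void postChore();                         // e.g. release per-run resources
}

class CleanerChoreSketch {
  private final List<FileCleanerDelegateSketch> delegates;
  private final List<String> candidates;

  CleanerChoreSketch(List<FileCleanerDelegateSketch> delegates, List<String> candidates) {
    this.delegates = delegates;
    this.candidates = candidates;
  }

  void chore() {
    for (FileCleanerDelegateSketch d : delegates) d.preChore();   // proposed hook
    for (String file : candidates) {
      boolean deletable = true;
      for (FileCleanerDelegateSketch d : delegates) {
        deletable &= d.isFileDeletable(file);
      }
      if (deletable) {
        // ... delete the file ...
      }
    }
    for (FileCleanerDelegateSketch d : delegates) d.postChore();  // proposed hook
  }
}
{code}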

> Don't delete HFiles when in "backup mode"
> -
>
> Key: HBASE-5547
> URL: https://issues.apache.org/jira/browse/HBASE-5547
> Project: HBase
>  Issue Type: New Feature
>Reporter: Lars Hofhansl
>Assignee: Jesse Yates
> Fix For: 0.96.0, 0.94.3
>
> Attachments: 5547.addendum-v3, 5547-addendum-v4.txt, 5547-v12.txt, 
> 5547-v16.txt, hbase-5447-v8.patch, hbase-5447-v8.patch, hbase-5547-v9.patch, 
> java_HBASE-5547.addendum, java_HBASE-5547.addendum-v1, 
> java_HBASE-5547.addendum-v2, java_HBASE-5547_v13.patch, 
> java_HBASE-5547_v14.patch, java_HBASE-5547_v15.patch, 
> java_HBASE-5547_v4.patch, java_HBASE-5547_v5.patch, java_HBASE-5547_v6.patch, 
> java_HBASE-5547_v7.patch
>
>
> This came up in a discussion I had with Stack.
> It would be nice if HBase could be notified that a backup is in progress (via 
> a znode for example) and in that case either:
> 1. rename HFiles that are to be deleted to .bck
> 2. rename the HFiles into a special directory
> 3. rename them to a general trash directory (which would not need to be tied 
> to backup mode).
> That way it should be possible to get a consistent backup based on HFiles (HDFS 
> snapshots or hard links would be better options here, but we do not have 
> those).
> #1 makes cleanup a bit harder.



[jira] [Commented] (HBASE-6712) Implement checkAndIncrement

2012-09-03 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447394#comment-13447394
 ] 

Lars Hofhansl commented on HBASE-6712:
--

Also note that after HBASE-6522 it is possible to do this with a coprocessor
endpoint.
(The missing piece before was that coprocessors could not take out locks.)
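
Very roughly, the server-side shape such an endpoint could take (all names here are hypothetical stand-ins, not the actual endpoint API):

{code}
// Hypothetical sketch: lock the row, verify it exists, then increment.
class CheckAndIncrementEndpointSketch {
  interface RegionOps {                      // stand-in for the region the endpoint runs against
    Object lockRow(byte[] row);              // row locks from endpoints: the HBASE-6522 piece
    void unlockRow(Object lock);
    boolean rowExists(byte[] row);
    long increment(byte[] row, byte[] family, byte[] qualifier, long amount);
  }

  private final RegionOps region;
  CheckAndIncrementEndpointSketch(RegionOps region) { this.region = region; }

  long checkAndIncrement(byte[] row, byte[] family, byte[] qualifier, long amount)
      throws java.io.IOException {
    Object lock = region.lockRow(row);       // hold the lock across check + increment
    try {
      if (!region.rowExists(row)) {
        throw new java.io.IOException("row does not exist");
      }
      return region.increment(row, family, qualifier, amount);
    } finally {
      region.unlockRow(lock);
    }
  }
}
{code}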

> Implement checkAndIncrement
> ---
>
> Key: HBASE-6712
> URL: https://issues.apache.org/jira/browse/HBASE-6712
> Project: HBase
>  Issue Type: New Feature
>  Components: client
>Affects Versions: 0.92.1
>Reporter: Stefan Baritchii
>
> increment should throw an exception if a row does not exist; instead it
> creates the row. checkAndIncrement may also be a solution to this, but it needs
> development.



[jira] [Updated] (HBASE-6713) Stopping meta/root RS may take 50mins when it is in region-splitting

2012-09-03 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6713:
-

Fix Version/s: 0.94.3

Targeting 0.94.3 for now. If that gets done before 0.94.2 I'll pull it in.

> Stopping meta/root RS may take 50mins when it is in region-splitting
> 
>
> Key: HBASE-6713
> URL: https://issues.apache.org/jira/browse/HBASE-6713
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.94.1
>Reporter: chunhui shen
>Assignee: chunhui shen
> Fix For: 0.94.3
>
> Attachments: HBASE-6713.patch
>
>
> When we stop the RS carrying ROOT/META while it is in the middle of splitting
> some region, the whole stopping process may take 50 minutes.
> The reason is:
> 1. The ROOT/META region is closed when stopping the regionserver.
> 2. The split transaction fails to update META and keeps retrying.
> 3. The retry count is 100, and the total time is about 50 minutes by default;
> this configuration is set by
> HConnectionManager#setServerSideHConnectionRetries.
> I think 50 minutes is too long to be acceptable; my suggested solution is to
> close the meta-table regions only after the compact/split thread is closed.



[jira] [Commented] (HBASE-6713) Stopping meta/root RS may take 50mins when it is in region-splitting

2012-09-03 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447391#comment-13447391
 ] 

Lars Hofhansl commented on HBASE-6713:
--

Looks good.
Why this change:
{code}
-if (!r.getRegionInfo().isMetaRegion() && r.isAvailable()) {
+if (!r.getRegionInfo().isMetaTable() && r.isAvailable()) {
{code}
?

> Stopping meta/root RS may take 50mins when it is in region-splitting
> 
>
> Key: HBASE-6713
> URL: https://issues.apache.org/jira/browse/HBASE-6713
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.94.1
>Reporter: chunhui shen
>Assignee: chunhui shen
> Attachments: HBASE-6713.patch
>
>
> When we stop the RS carrying ROOT/META while it is in the middle of splitting
> some region, the whole stopping process may take 50 minutes.
> The reason is:
> 1. The ROOT/META region is closed when stopping the regionserver.
> 2. The split transaction fails to update META and keeps retrying.
> 3. The retry count is 100, and the total time is about 50 minutes by default;
> this configuration is set by
> HConnectionManager#setServerSideHConnectionRetries.
> I think 50 minutes is too long to be acceptable; my suggested solution is to
> close the meta-table regions only after the compact/split thread is closed.



[jira] [Commented] (HBASE-6712) incrementColumnValue should throw an exception if row does not exist

2012-09-03 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447390#comment-13447390
 ] 

Lars Hofhansl commented on HBASE-6712:
--

I see. If you don't mind, I'll change the title to "Implement 
checkAndIncrement" (and maybe we'll fold this into a more general 
checkAndMutate implementation).

> incrementColumnValue should throw an exception if row does not exist 
> -
>
> Key: HBASE-6712
> URL: https://issues.apache.org/jira/browse/HBASE-6712
> Project: HBase
>  Issue Type: Bug
>  Components: client
>Affects Versions: 0.92.1
>Reporter: Stefan Baritchii
>
> increment should throw an exception if a row does not exist; instead it
> creates the row. checkAndIncrement may also be a solution to this, but it needs
> development.



[jira] [Updated] (HBASE-6712) Implement checkAndIncrement

2012-09-03 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6712:
-

Issue Type: New Feature  (was: Bug)
   Summary: Implement checkAndIncrement  (was: incrementColumnValue should 
throw an exception if row does not exist )

> Implement checkAndIncrement
> ---
>
> Key: HBASE-6712
> URL: https://issues.apache.org/jira/browse/HBASE-6712
> Project: HBase
>  Issue Type: New Feature
>  Components: client
>Affects Versions: 0.92.1
>Reporter: Stefan Baritchii
>
> increment should throw an exception if a row does not exist; instead it
> creates the row. checkAndIncrement may also be a solution to this, but it needs
> development.



[jira] [Commented] (HBASE-6712) incrementColumnValue should throw an exception if row does not exist

2012-09-03 Thread Stefan Baritchii (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447307#comment-13447307
 ] 

Stefan Baritchii commented on HBASE-6712:
-

My use case: I have a row with a rowid and a column that is incremented. The rowid
represents a resource name, and the column represents how many clients are sharing
that resource.

It would be nice if I were able to create the row first (I do it only once
anyway), and then increment/decrement its sharing count.

To answer the "which would definitely be bad" part: I think it has already been
done for Put, where you have the checkAndPut() method. Do you think it would be bad
to have a "checkAndIncrement" method similar to checkAndPut?
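
For comparison, here is a hypothetical client-side signature modeled on the existing checkAndPut(row, family, qualifier, value, put); this method does not exist in HBase, it is only what the request above might look like:

{code}
import java.io.IOException;

// Hypothetical API sketch, not part of HTableInterface.
interface CheckAndIncrementSketch {
  // Increment only if the current value of row/family/qualifier equals
  // expectedValue (or, say, only if the cell exists at all); otherwise fail
  // instead of silently creating the row.  Returns the new value.
  long checkAndIncrement(byte[] row, byte[] family, byte[] qualifier,
                         byte[] expectedValue, long amount) throws IOException;
}
{code}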

> incrementColumnValue should throw an exception if row does not exist 
> -
>
> Key: HBASE-6712
> URL: https://issues.apache.org/jira/browse/HBASE-6712
> Project: HBase
>  Issue Type: Bug
>  Components: client
>Affects Versions: 0.92.1
>Reporter: Stefan Baritchii
>
> increment should throw an exception if a row does not exist; instead it
> creates the row. checkAndIncrement may also be a solution to this, but it needs
> development.



[jira] [Updated] (HBASE-6713) Stopping meta/root RS may take 50mins when it is in region-splitting

2012-09-03 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-6713:
--

Status: Patch Available  (was: Open)

> Stopping meta/root RS may take 50mins when it is in region-splitting
> 
>
> Key: HBASE-6713
> URL: https://issues.apache.org/jira/browse/HBASE-6713
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.94.1
>Reporter: chunhui shen
>Assignee: chunhui shen
> Attachments: HBASE-6713.patch
>
>
> When we stop the RS carrying ROOT/META while it is in the middle of splitting
> some region, the whole stopping process may take 50 minutes.
> The reason is:
> 1. The ROOT/META region is closed when stopping the regionserver.
> 2. The split transaction fails to update META and keeps retrying.
> 3. The retry count is 100, and the total time is about 50 minutes by default;
> this configuration is set by
> HConnectionManager#setServerSideHConnectionRetries.
> I think 50 minutes is too long to be acceptable; my suggested solution is to
> close the meta-table regions only after the compact/split thread is closed.



[jira] [Updated] (HBASE-6695) [Replication] Data will lose if RegionServer down during transferqueue

2012-09-03 Thread terry zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

terry zhang updated HBASE-6695:
---

Attachment: HBASE-6695-4trunk_v2.patch

Check the regionserver stopper during the loop.
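
Roughly what "check the regionserver stopper during the loop" means (simplified sketch, not the actual ReplicationSourceManager code):

{code}
import java.util.List;

// Toy version of the queue-transfer loop that bails out when the server stops.
class QueueTransferSketch {
  interface Stopper { boolean isStopped(); }   // stand-in for HBase's Stoppable

  static int copyQueue(List<String> hlogZnodes, Stopper stopper) {
    int copied = 0;
    for (String znode : hlogZnodes) {
      if (stopper.isStopped()) {
        break;                // stop cleanly so the queue stays claimable by another RS
      }
      // ... create the corresponding znode under this server's queue here ...
      copied++;
    }
    return copied;
  }
}
{code}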

> [Replication] Data will lose if RegionServer down during transferqueue
> --
>
> Key: HBASE-6695
> URL: https://issues.apache.org/jira/browse/HBASE-6695
> Project: HBase
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.94.1
>Reporter: terry zhang
>Priority: Critical
> Fix For: 0.96.0, 0.94.3
>
> Attachments: HBASE-6695-4trunk.patch, HBASE-6695-4trunk_v2.patch, 
> HBASE-6695.patch
>
>
> While we were testing the replication failover feature, we found that if we kill a
> regionserver while it is transferring its queue, only part of the hlog znodes
> are copied to the right path, because the failover process is interrupted.
> Log:
> 2012-08-29 12:20:05,660 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager: 
> Moving dw92.kgb.sqa.cm4,60020,1346210789716's hlogs to my queue
> 2012-08-29 12:20:05,765 DEBUG 
> org.apache.hadoop.hbase.replication.ReplicationZookeeper: Creating 
> dw92.kgb.sqa.cm4%2C60020%2C13462107 89716.1346213720708 with data 210508162
> 2012-08-29 12:20:05,850 DEBUG 
> org.apache.hadoop.hbase.replication.ReplicationZookeeper: Creating 
> dw92.kgb.sqa.cm4%2C60020%2C13462107 89716.1346213886800 with data
> 2012-08-29 12:20:05,938 DEBUG 
> org.apache.hadoop.hbase.replication.ReplicationZookeeper: Creating 
> dw92.kgb.sqa.cm4%2C60020%2C1346210789716.1346213830559 with data
> 2012-08-29 12:20:06,055 DEBUG 
> org.apache.hadoop.hbase.replication.ReplicationZookeeper: Creating 
> dw92.kgb.sqa.cm4%2C60020%2C1346210789716.1346213775146 with data
> 2012-08-29 12:20:06,277 WARN 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: 
> Failed all from region=.ME
> TA.,,1.1028785192, hostname=dw93.kgb.sqa.cm4, port=60020
> java.util.concurrent.ExecutionException: java.net.ConnectException: 
> Connection refused
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
> at java.util.concurrent.FutureTask.get(FutureTask.java:83)
> at 
> ..
> This server is down.
> ZK node status:
> [zk: 10.232.98.77:2181(CONNECTED) 6] ls 
> /hbase-test3-repl/replication/rs/dw92.kgb.sqa.cm4,60020,1346210789716
> [lock, 1, 1-dw89.kgb.sqa.cm4,60020,1346202436268]
>  
> dw92 is down, but the node dw92.kgb.sqa.cm4,60020,1346210789716 can't be deleted



[jira] [Commented] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2012-09-03 Thread Jieshan Bean (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447150#comment-13447150
 ] 

Jieshan Bean commented on HBASE-1936:
-

Sorry, some other higher-priority tasks delayed my plan for this feature. I will
do it this weekend, how about that?

> ClassLoader that loads from hdfs; useful adding filters to classpath without 
> having to restart services
> ---
>
> Key: HBASE-1936
> URL: https://issues.apache.org/jira/browse/HBASE-1936
> Project: HBase
>  Issue Type: New Feature
>Reporter: stack
>Assignee: Jieshan Bean
>  Labels: noob
> Attachments: cp_from_hdfs.patch, HBASE-1936-trunk(forReview).patch
>
>

