[jira] [Updated] (HBASE-8218) Pass HConnection and ExecutorService as parameters to methods of AggregationClient

2013-04-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-8218:
--

Fix Version/s: 0.98.0
  Summary: Pass HConnection and ExecutorService as parameters to 
methods of AggregationClient  (was: pass HTable as a parameter to method of 
AggregationClient)

> Pass HConnection and ExecutorService as parameters to methods of 
> AggregationClient
> --
>
> Key: HBASE-8218
> URL: https://issues.apache.org/jira/browse/HBASE-8218
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Coprocessors
>Affects Versions: 0.94.3
>Reporter: cuijianwei
> Fix For: 0.98.0
>
> Attachments: HBASE-8218-0.94.3-v1.txt, HBASE-8218-0.94.3-v2.txt
>
>
> In AggregationClient, methods such as max(...) and min(...) take 'tableName' as 
> a parameter; an HTable is then created inside the method and closed again 
> before the method returns.
> This can be heavy because every call must create and close an HTable. The 
> situation is worse when only a single thread accesses HBase through 
> AggregationClient: the underlying HConnection of the created HTable is also 
> created and closed on every invocation, because no other HTable shares that 
> HConnection. Could we therefore add another group of methods to 
> AggregationClient that take an HTable or HTablePool as a parameter?
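The overhead described above can be sketched with plain JDK code (no HBase dependency; the Connection class below is a hypothetical stand-in for HConnection/HTable setup cost, not the real API):

```java
// Illustrates the cost pattern: creating and closing an expensive resource
// inside every call, versus accepting an already-open one as a parameter.
public class ResourceReuse {
    static class Connection implements AutoCloseable {
        static int opened = 0;
        Connection() { opened++; }          // stands in for HConnection setup cost
        int query() { return 1; }
        @Override public void close() { }
    }

    // Per-call creation, as in the current max(...)/min(...) methods:
    static int aggregatePerCall() {
        try (Connection conn = new Connection()) {
            return conn.query();
        }
    }

    // Proposed style: the caller owns the connection and passes it in:
    static int aggregateWith(Connection conn) {
        return conn.query();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) aggregatePerCall();   // opens 3 connections
        try (Connection shared = new Connection()) {       // opens only 1
            for (int i = 0; i < 3; i++) aggregateWith(shared);
        }
        System.out.println(Connection.opened);             // prints "4"
    }
}
```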

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8218) pass HTable as a parameter to method of AggregationClient

2013-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629850#comment-13629850
 ] 

Ted Yu commented on HBASE-8218:
---

Thanks for the quick response.
{code}
+  this.threadPool.awaitTermination(Long.MAX_VALUE, TimeUnit.SECONDS);
{code}
Is the above waiting period too long? How about waiting for 10 minutes?
If awaitTermination() returns false, you should call shutdownNow():
http://docs.oracle.com/javase/6/docs/api/java/util/concurrent/ExecutorService.html#shutdownNow()

Please generate next patch from trunk so that hadoop QA can test the patch.
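The suggested pattern — shutdown(), a bounded awaitTermination(), then shutdownNow() as a fallback — can be sketched with plain java.util.concurrent (illustrative only; PoolShutdown and the bound used here are assumptions, not part of the patch):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Bounded-wait shutdown: stop the pool, wait a limited time, then force-stop.
public class PoolShutdown {
    public static boolean shutdownGracefully(ExecutorService pool, long timeout, TimeUnit unit)
            throws InterruptedException {
        pool.shutdown();                       // reject new tasks
        if (pool.awaitTermination(timeout, unit)) {
            return true;                       // all tasks finished in time
        }
        pool.shutdownNow();                    // interrupt still-running tasks
        return pool.awaitTermination(timeout, unit);
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> { /* short task */ });
        System.out.println(shutdownGracefully(pool, 10, TimeUnit.MINUTES)); // prints "true"
    }
}
```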

> pass HTable as a parameter to method of AggregationClient
> -
>
> Key: HBASE-8218
> URL: https://issues.apache.org/jira/browse/HBASE-8218
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Coprocessors
>Affects Versions: 0.94.3
>Reporter: cuijianwei
> Attachments: HBASE-8218-0.94.3-v1.txt, HBASE-8218-0.94.3-v2.txt
>
>
> In AggregationClient, methods such as max(...) and min(...) take 'tableName' as 
> a parameter; an HTable is then created inside the method and closed again 
> before the method returns.
> This can be heavy because every call must create and close an HTable. The 
> situation is worse when only a single thread accesses HBase through 
> AggregationClient: the underlying HConnection of the created HTable is also 
> created and closed on every invocation, because no other HTable shares that 
> HConnection. Could we therefore add another group of methods to 
> AggregationClient that take an HTable or HTablePool as a parameter?



[jira] [Commented] (HBASE-8317) Seek returns wrong result with PREFIX_TREE Encoding

2013-04-11 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629849#comment-13629849
 ] 

chunhui shen commented on HBASE-8317:
-

bq. Please also log the seed so that we can reproduce the issue if this test 
ever fails.
If the test fails, we dump all the test KeyValues via 
TestPrefixTreeEncoding#dumpInputKVSet().

> Seek returns wrong result with PREFIX_TREE Encoding
> ---
>
> Key: HBASE-8317
> URL: https://issues.apache.org/jira/browse/HBASE-8317
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.0
>Reporter: chunhui shen
>Assignee: chunhui shen
> Attachments: HBASE-8317-v1.patch, hbase-trunk-8317.patch, 
> hbase-trunk-8317v3.patch
>
>
> TestPrefixTreeEncoding#testSeekWithFixedData from the patch could reproduce 
> the bug.
> An example of the bug case:
> Suppose the following rows:
> 1.row3/c1:q1/
> 2.row3/c1:q2/
> 3.row3/c1:q3/
> 4.row4/c1:q1/
> 5.row4/c1:q2/
> After seeking to row 'row30', the expected peek KV is row4/c1:q1/, but the 
> actual result is row3/c1:q1/.
> I fix only this bug case in the patch. Maybe we can do more for other 
> potential problems if anyone is familiar with the PREFIX_TREE code.



[jira] [Commented] (HBASE-8143) HBase on Hadoop 2 with local short circuit reads (ssr) causes OOM

2013-04-11 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629848#comment-13629848
 ] 

Enis Soztutar commented on HBASE-8143:
--

Not yet, but this is on my radar. We know that the issue is with the buffer 
size. We just have to test with a smaller size to see whether there is any 
performance impact. 
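One knob for such an experiment, assuming the Hadoop 2 short-circuit read path honors it, is the client-side read buffer size (the key name and 1 MB default below are taken from Hadoop 2's HDFS defaults; verify them against the deployed version):

```xml
<!-- hdfs-site.xml on the RegionServer; a sketch, not a tuned recommendation -->
<property>
  <name>dfs.client.read.shortcircuit.buffer.size</name>
  <!-- smaller than the 1 MB default, to bound per-open-file direct-buffer use -->
  <value>131072</value>
</property>
```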

> HBase on Hadoop 2 with local short circuit reads (ssr) causes OOM 
> --
>
> Key: HBASE-8143
> URL: https://issues.apache.org/jira/browse/HBASE-8143
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop2
>Affects Versions: 0.98.0, 0.94.7, 0.95.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.98.0, 0.94.8, 0.95.1
>
> Attachments: OpenFileTest.java
>
>
> We've run into an issue with HBase 0.94 on Hadoop 2, with SSR turned on: the 
> memory usage of the HBase process grows to 7g on an -Xmx3g heap, and after 
> some time this causes OOMs for the RSs. 
> Upon further investigation, I found that we end up with 200 regions, each 
> having 3-4 store files open. Under Hadoop 2 SSR, BlockReaderLocal allocates 
> DirectBuffers, unlike HDFS 1, where there is no direct buffer allocation. 
> It seems that there are no guards against the memory used by local buffers in 
> HDFS 2, and having a large number of open files causes multiple GB of memory 
> to be consumed by the RS process. 
> This issue is to investigate further what is going on: whether we can limit 
> the memory usage in HDFS or HBase, and/or document the setup. 
> Possible mitigation scenarios are: 
>  - Turn off SSR for Hadoop 2
>  - Ensure that there is enough unallocated memory for the RS based on the 
> expected number of store files
>  - Ensure that there is a lower number of regions per region server (and 
> hence fewer open files)
> Stack trace:
> {code}
> org.apache.hadoop.hbase.DroppedSnapshotException: region: 
> IntegrationTestLoadAndVerify,yC^P\xD7\x945\xD4,1363388517630.24655343d8d356ef708732f34cfe8946.
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1560)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1439)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1380)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:449)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushOneForGlobalPressure(MemStoreFlusher.java:215)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$500(MemStoreFlusher.java:63)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:237)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:632)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:97)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
> at 
> org.apache.hadoop.hdfs.util.DirectBufferPool.getBuffer(DirectBufferPool.java:70)
> at 
> org.apache.hadoop.hdfs.BlockReaderLocal.<init>(BlockReaderLocal.java:315)
> at 
> org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:208)
> at 
> org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689)
> at java.io.DataInputStream.readFully(DataInputStream.java:178)
> at 
> org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:312)
> at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:543)
> at 
> org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:589)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1261)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:512)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:603)
> at 
> org.apache.hadoop.hbase.regionserver.Store.validateStoreFile(Store.java:1568)
> at 
> org.apache.hadoop.hbase.regionserver.Store.commitFile(Store.java:845)
> at 
> org.apache.hadoop.hbase.regionserver.Store.access$500(Store.java:109)
> at 
> org.apache.hadoop.hbase.regionserver.Store$StoreFlusherImpl.commit(Store.j

[jira] [Commented] (HBASE-8218) pass HTable as a parameter to method of AggregationClient

2013-04-11 Thread cuijianwei (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629845#comment-13629845
 ] 

cuijianwei commented on HBASE-8218:
---

Thanks for your concern. I made a new patch that introduces a 'closed' flag. I 
think it is more reasonable to wait for all executing tasks to finish before 
returning from close(), so I invoke 'awaitTermination' after 'shutdown'. I 
invoke 'close()' in the corresponding unit test of 'max(...)', and it works 
locally.
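The close-flag pattern described here might look like the following plain-JDK sketch (PooledClient and the finite 10-minute bound are illustrative assumptions, not the actual AggregationClient patch):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative client that owns a thread pool and drains it on close().
public class PooledClient implements AutoCloseable {
    private final ExecutorService threadPool = Executors.newFixedThreadPool(4);
    private volatile boolean closed = false;

    public void submitWork(Runnable task) {
        if (closed) {
            throw new IllegalStateException("client is closed");
        }
        threadPool.submit(task);
    }

    @Override
    public void close() throws InterruptedException {
        if (closed) {
            return;                 // close() is idempotent
        }
        closed = true;
        threadPool.shutdown();      // reject new tasks
        // Wait for in-flight tasks; an unbounded Long.MAX_VALUE wait was
        // questioned in review, so a finite bound is used here.
        threadPool.awaitTermination(10, TimeUnit.MINUTES);
    }

    public boolean isClosed() { return closed; }
}
```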

> pass HTable as a parameter to method of AggregationClient
> -
>
> Key: HBASE-8218
> URL: https://issues.apache.org/jira/browse/HBASE-8218
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Coprocessors
>Affects Versions: 0.94.3
>Reporter: cuijianwei
> Attachments: HBASE-8218-0.94.3-v1.txt, HBASE-8218-0.94.3-v2.txt
>
>
> In AggregationClient, methods such as max(...) and min(...) take 'tableName' as 
> a parameter; an HTable is then created inside the method and closed again 
> before the method returns.
> This can be heavy because every call must create and close an HTable. The 
> situation is worse when only a single thread accesses HBase through 
> AggregationClient: the underlying HConnection of the created HTable is also 
> created and closed on every invocation, because no other HTable shares that 
> HConnection. Could we therefore add another group of methods to 
> AggregationClient that take an HTable or HTablePool as a parameter?



[jira] [Commented] (HBASE-8317) Seek returns wrong result with PREFIX_TREE Encoding

2013-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629844#comment-13629844
 ] 

Ted Yu commented on HBASE-8317:
---

The new test is a small test. I think we can keep it.
{code}
+Random random = new Random();
{code}
Can a seed be used above? Please also log the seed so that we can reproduce 
the issue if this test ever fails.
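A minimal sketch of the request — seed the Random and log the seed so a failing run can be replayed (class and method names are hypothetical, not from the patch):

```java
import java.util.Random;

// Seeded Random whose seed is recorded so a failing run can be reproduced.
public class SeededTestData {
    public static int[] generate(long seed, int n) {
        System.out.println("test seed = " + seed);   // log the seed for replay
        Random random = new Random(seed);
        int[] data = new int[n];
        for (int i = 0; i < n; i++) {
            data[i] = random.nextInt(1000);
        }
        return data;
    }

    public static void main(String[] args) {
        long seed = System.currentTimeMillis();      // or a fixed value to pin the run
        generate(seed, 5);
    }
}
```

Re-running with the logged seed yields exactly the same KeyValue inputs, which is the point of logging it.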

> Seek returns wrong result with PREFIX_TREE Encoding
> ---
>
> Key: HBASE-8317
> URL: https://issues.apache.org/jira/browse/HBASE-8317
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.0
>Reporter: chunhui shen
>Assignee: chunhui shen
> Attachments: HBASE-8317-v1.patch, hbase-trunk-8317.patch, 
> hbase-trunk-8317v3.patch
>
>
> TestPrefixTreeEncoding#testSeekWithFixedData from the patch could reproduce 
> the bug.
> An example of the bug case:
> Suppose the following rows:
> 1.row3/c1:q1/
> 2.row3/c1:q2/
> 3.row3/c1:q3/
> 4.row4/c1:q1/
> 5.row4/c1:q2/
> After seeking to row 'row30', the expected peek KV is row4/c1:q1/, but the 
> actual result is row3/c1:q1/.
> I fix only this bug case in the patch. Maybe we can do more for other 
> potential problems if anyone is familiar with the PREFIX_TREE code.



[jira] [Updated] (HBASE-8218) pass HTable as a parameter to method of AggregationClient

2013-04-11 Thread cuijianwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

cuijianwei updated HBASE-8218:
--

Attachment: HBASE-8218-0.94.3-v2.txt

Add a 'closed' status, and wait in 'close()' until all executing tasks have finished.

> pass HTable as a parameter to method of AggregationClient
> -
>
> Key: HBASE-8218
> URL: https://issues.apache.org/jira/browse/HBASE-8218
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Coprocessors
>Affects Versions: 0.94.3
>Reporter: cuijianwei
> Attachments: HBASE-8218-0.94.3-v1.txt, HBASE-8218-0.94.3-v2.txt
>
>
> In AggregationClient, methods such as max(...) and min(...) take 'tableName' as 
> a parameter; an HTable is then created inside the method and closed again 
> before the method returns.
> This can be heavy because every call must create and close an HTable. The 
> situation is worse when only a single thread accesses HBase through 
> AggregationClient: the underlying HConnection of the created HTable is also 
> created and closed on every invocation, because no other HTable shares that 
> HConnection. Could we therefore add another group of methods to 
> AggregationClient that take an HTable or HTablePool as a parameter?



[jira] [Commented] (HBASE-8330) What is the necessity of having a private ThreadLocal in FSReaderV2

2013-04-11 Thread Manukranth Kolloju (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629840#comment-13629840
 ] 

Manukranth Kolloju commented on HBASE-8330:
---

I can add more details if anyone wants to discuss this issue.

> What is the necessity of having a private ThreadLocal in FSReaderV2
> ---
>
> Key: HBASE-8330
> URL: https://issues.apache.org/jira/browse/HBASE-8330
> Project: HBase
>  Issue Type: Brainstorming
>  Components: HFile
>Affects Versions: 0.89-fb
>Reporter: Manukranth Kolloju
>Assignee: Manukranth Kolloju
>Priority: Minor
> Fix For: 0.89-fb
>
>
> I was trying to investigate the scenarios in which we perform a 24-byte 
> (header size) seek back while doing an HFileBlock read, and in the process I 
> stumbled upon this issue. To avoid the seek-back problem, we store the header 
> of the next block in a class named PrefetchedHeader. This prefetched header 
> is stored as a private ThreadLocal object in the FSReaderV2 class. I was 
> wondering why we would need a ThreadLocal when each FSReader object has its 
> own PrefetchedHeader object, especially since it is private. Can anybody 
> familiar with this part of the code tell me what design decision was made at 
> the time?



[jira] [Created] (HBASE-8330) What is the necessity of having a private ThreadLocal in FSReaderV2

2013-04-11 Thread Manukranth Kolloju (JIRA)
Manukranth Kolloju created HBASE-8330:
-

 Summary: What is the necessity of having a private ThreadLocal in 
FSReaderV2
 Key: HBASE-8330
 URL: https://issues.apache.org/jira/browse/HBASE-8330
 Project: HBase
  Issue Type: Brainstorming
  Components: HFile
Affects Versions: 0.89-fb
Reporter: Manukranth Kolloju
Assignee: Manukranth Kolloju
Priority: Minor
 Fix For: 0.89-fb


I was trying to investigate the scenarios in which we perform a 24-byte 
(header size) seek back while doing an HFileBlock read, and in the process I 
stumbled upon this issue. To avoid the seek-back problem, we store the header 
of the next block in a class named PrefetchedHeader. This prefetched header is 
stored as a private ThreadLocal object in the FSReaderV2 class. I was 
wondering why we would need a ThreadLocal when each FSReader object has its 
own PrefetchedHeader object, especially since it is private. Can anybody 
familiar with this part of the code tell me what design decision was made at 
the time?
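The pattern being questioned can be sketched as follows (illustrative only; ReaderSketch is not the real FSReaderV2 — the point is that a ThreadLocal gives each thread its own copy even inside an already-private field, which only pays off if multiple threads read through the same reader concurrently):

```java
// Sketch of a per-reader cache held in a ThreadLocal. A plain private field
// would be shared by all threads using this reader instance; the ThreadLocal
// instead yields one PrefetchedHeader per (reader, thread) pair.
public class ReaderSketch {
    static class PrefetchedHeader {
        long offset = -1;
        byte[] header = new byte[24];   // 24-byte block header, per the description
    }

    private final ThreadLocal<PrefetchedHeader> prefetchedHeader =
        ThreadLocal.withInitial(PrefetchedHeader::new);

    PrefetchedHeader headerForCurrentThread() {
        return prefetchedHeader.get();  // same instance on repeated calls from one thread
    }
}
```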



[jira] [Updated] (HBASE-8306) Enhance TestJoinedScanners with ability to simulate more scenarios

2013-04-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-8306:
--

Attachment: 8306-v5.txt

> Enhance TestJoinedScanners with ability to simulate more scenarios
> --
>
> Key: HBASE-8306
> URL: https://issues.apache.org/jira/browse/HBASE-8306
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 8306-v1.txt, 8306-v5.txt
>
>
> Currently TestJoinedScanners uses fixed lengths of values for essential and 
> non-essential column families.
> The selection rate of SingleColumnValueFilter is fixed and distribution of 
> selected rows forms stripes.
> TestJoinedScanners can be enhanced in the following ways:
> 1. main() can be introduced so that the test can be run standalone
> 2. selection ratio can be specified by user
> 3. distribution of selected rows should be random
> 4. user should be able to specify data block encoding for the column families



[jira] [Updated] (HBASE-8306) Enhance TestJoinedScanners with ability to simulate more scenarios

2013-04-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-8306:
--

Attachment: (was: 8306-v5.txt)

> Enhance TestJoinedScanners with ability to simulate more scenarios
> --
>
> Key: HBASE-8306
> URL: https://issues.apache.org/jira/browse/HBASE-8306
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 8306-v1.txt
>
>
> Currently TestJoinedScanners uses fixed lengths of values for essential and 
> non-essential column families.
> The selection rate of SingleColumnValueFilter is fixed and distribution of 
> selected rows forms stripes.
> TestJoinedScanners can be enhanced in the following ways:
> 1. main() can be introduced so that the test can be run standalone
> 2. selection ratio can be specified by user
> 3. distribution of selected rows should be random
> 4. user should be able to specify data block encoding for the column families



[jira] [Commented] (HBASE-8317) Seek returns wrong result with PREFIX_TREE Encoding

2013-04-11 Thread Matt Corgan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629833#comment-13629833
 ] 

Matt Corgan commented on HBASE-8317:


Should be ok to commit both tests.  I think it's important to have the 
fine-grained tests in the prefix-tree module to prove its correctness with the 
simplest test possible, but those tests only verify the prefix-tree code.  
Tests in hbase-server can cover all encoding types.

The downside to building up redundant tests in hbase-server is that we will 
always be afraid to remove or modify them, creating a maintenance burden. We 
should keep it, I think, but be conscious of the fact that too coarse-grained 
a test suite may slow us down later.

> Seek returns wrong result with PREFIX_TREE Encoding
> ---
>
> Key: HBASE-8317
> URL: https://issues.apache.org/jira/browse/HBASE-8317
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.0
>Reporter: chunhui shen
>Assignee: chunhui shen
> Attachments: HBASE-8317-v1.patch, hbase-trunk-8317.patch, 
> hbase-trunk-8317v3.patch
>
>
> TestPrefixTreeEncoding#testSeekWithFixedData from the patch could reproduce 
> the bug.
> An example of the bug case:
> Suppose the following rows:
> 1.row3/c1:q1/
> 2.row3/c1:q2/
> 3.row3/c1:q3/
> 4.row4/c1:q1/
> 5.row4/c1:q2/
> After seeking to row 'row30', the expected peek KV is row4/c1:q1/, but the 
> actual result is row3/c1:q1/.
> I fix only this bug case in the patch. Maybe we can do more for other 
> potential problems if anyone is familiar with the PREFIX_TREE code.



[jira] [Updated] (HBASE-8306) Enhance TestJoinedScanners with ability to simulate more scenarios

2013-04-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-8306:
--

Attachment: 8306-v5.txt

Due to usage of FAST_DIFF encoding, the runtime of TestJoinedScanners increased 
from 5 min to 6.1 min.

Patch v5 reduces the number of times runScanner() is run.

> Enhance TestJoinedScanners with ability to simulate more scenarios
> --
>
> Key: HBASE-8306
> URL: https://issues.apache.org/jira/browse/HBASE-8306
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 8306-v1.txt, 8306-v5.txt
>
>
> Currently TestJoinedScanners uses fixed lengths of values for essential and 
> non-essential column families.
> The selection rate of SingleColumnValueFilter is fixed and distribution of 
> selected rows forms stripes.
> TestJoinedScanners can be enhanced in the following ways:
> 1. main() can be introduced so that the test can be run standalone
> 2. selection ratio can be specified by user
> 3. distribution of selected rows should be random
> 4. user should be able to specify data block encoding for the column families



[jira] [Updated] (HBASE-8306) Enhance TestJoinedScanners with ability to simulate more scenarios

2013-04-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-8306:
--

Attachment: (was: 8306-v4.txt)

> Enhance TestJoinedScanners with ability to simulate more scenarios
> --
>
> Key: HBASE-8306
> URL: https://issues.apache.org/jira/browse/HBASE-8306
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 8306-v1.txt
>
>
> Currently TestJoinedScanners uses fixed lengths of values for essential and 
> non-essential column families.
> The selection rate of SingleColumnValueFilter is fixed and distribution of 
> selected rows forms stripes.
> TestJoinedScanners can be enhanced in the following ways:
> 1. main() can be introduced so that the test can be run standalone
> 2. selection ratio can be specified by user
> 3. distribution of selected rows should be random
> 4. user should be able to specify data block encoding for the column families



[jira] [Commented] (HBASE-7255) KV size metric went missing from StoreScanner.

2013-04-11 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629828#comment-13629828
 ] 

Elliott Clark commented on HBASE-7255:
--

Nope. The MetricsStat type keeps track of min/max/count, so it adds the 
appropriate suffix (NumOps, Min, Max) to each number.

> KV size metric went missing from StoreScanner.
> --
>
> Key: HBASE-7255
> URL: https://issues.apache.org/jira/browse/HBASE-7255
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.95.1
>
> Attachments: HBASE-7255-0.patch, HBASE-7255-1.patch, 
> HBASE-7255-2.patch, HBASE-7255-3.patch, HBASE-7255-4.patch
>
>
> In trunk due to the metric refactor, at least the KV size metric went missing.
> See this code in StoreScanner.java:
> {code}
> } finally {
>   if (cumulativeMetric > 0 && metric != null) {
>   }
> }
> {code}
> Just an empty if statement, where the metric used to be collected.



[jira] [Updated] (HBASE-8067) TestHFileArchiving.testArchiveOnTableDelete sometimes fails

2013-04-11 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-8067:
-

Fix Version/s: (was: 0.94.7)
   0.94.8

Will do at the beginning of the 0.94.8 cycle.

> TestHFileArchiving.testArchiveOnTableDelete sometimes fails
> ---
>
> Key: HBASE-8067
> URL: https://issues.apache.org/jira/browse/HBASE-8067
> Project: HBase
>  Issue Type: Bug
>  Components: Admin, master, test
>Affects Versions: 0.94.6, 0.95.2
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 0.98.0, 0.94.8, 0.95.1
>
> Attachments: 8067-0.94.txt, HBASE-8067-debug.patch, 
> HBASE-8067-v0.patch
>
>
> It seems that testArchiveOnTableDelete() fails because the archiving in 
> DeleteTableHandler is still in progress when admin.deleteTable() returns.
> {code}
> Error Message
> Archived files are missing some of the store files!
> Stacktrace
> java.lang.AssertionError: Archived files are missing some of the store files!
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hbase.backup.TestHFileArchiving.testArchiveOnTableDelete(TestHFileArchiving.java:262)
> {code}
> (Looking at the problem more generally: we don't have any way to inform the 
> client when an async operation has completed.)



[jira] [Updated] (HBASE-8261) Backport HBASE-7718 TestClassLoading needs to consider runtime classpath in buildCoprocessorJar

2013-04-11 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-8261:
-

Fix Version/s: (was: 0.94.7)
   0.94.8

Running the test from the command line fails with the patch (and passes 
without). Moving to 0.94.8 for now.

> Backport HBASE-7718 TestClassLoading needs to consider runtime classpath in 
> buildCoprocessorJar
> ---
>
> Key: HBASE-8261
> URL: https://issues.apache.org/jira/browse/HBASE-8261
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: Jean-Marc Spaggiari
> Fix For: 0.94.8
>
>
> See patch attached to HBASE-7718 for 0.94.



[jira] [Updated] (HBASE-8306) Enhance TestJoinedScanners with ability to simulate more scenarios

2013-04-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-8306:
--

Attachment: (was: 8306-v4.txt)

> Enhance TestJoinedScanners with ability to simulate more scenarios
> --
>
> Key: HBASE-8306
> URL: https://issues.apache.org/jira/browse/HBASE-8306
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 8306-v1.txt, 8306-v4.txt
>
>
> Currently TestJoinedScanners uses fixed lengths of values for essential and 
> non-essential column families.
> The selection rate of SingleColumnValueFilter is fixed and distribution of 
> selected rows forms stripes.
> TestJoinedScanners can be enhanced in the following ways:
> 1. main() can be introduced so that the test can be run standalone
> 2. selection ratio can be specified by user
> 3. distribution of selected rows should be random
> 4. user should be able to specify data block encoding for the column families



[jira] [Updated] (HBASE-8306) Enhance TestJoinedScanners with ability to simulate more scenarios

2013-04-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-8306:
--

Attachment: 8306-v4.txt

> Enhance TestJoinedScanners with ability to simulate more scenarios
> --
>
> Key: HBASE-8306
> URL: https://issues.apache.org/jira/browse/HBASE-8306
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 8306-v1.txt, 8306-v4.txt
>
>
> Currently TestJoinedScanners uses fixed lengths of values for essential and 
> non-essential column families.
> The selection rate of SingleColumnValueFilter is fixed and distribution of 
> selected rows forms stripes.
> TestJoinedScanners can be enhanced in the following ways:
> 1. main() can be introduced so that the test can be run standalone
> 2. selection ratio can be specified by user
> 3. distribution of selected rows should be random
> 4. user should be able to specify data block encoding for the column families



[jira] [Updated] (HBASE-8261) Backport HBASE-7718 TestClassLoading needs to consider runtime classpath in buildCoprocessorJar

2013-04-11 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-8261:
-

Description: See patch attached to HBASE-7718 for 0.94.  (was: See patch 
attached to hbase-7718 for 0.94.)

> Backport HBASE-7718 TestClassLoading needs to consider runtime classpath in 
> buildCoprocessorJar
> ---
>
> Key: HBASE-8261
> URL: https://issues.apache.org/jira/browse/HBASE-8261
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: Jean-Marc Spaggiari
> Fix For: 0.94.7
>
>
> See patch attached to HBASE-7718 for 0.94.



[jira] [Updated] (HBASE-8143) HBase on Hadoop 2 with local short circuit reads (ssr) causes OOM

2013-04-11 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-8143:
-

Fix Version/s: (was: 0.94.7)
   0.94.8

Moving out to 0.94.8

> HBase on Hadoop 2 with local short circuit reads (ssr) causes OOM 
> --
>
> Key: HBASE-8143
> URL: https://issues.apache.org/jira/browse/HBASE-8143
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop2
>Affects Versions: 0.98.0, 0.94.7, 0.95.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.98.0, 0.94.8, 0.95.1
>
> Attachments: OpenFileTest.java
>
>
> We've run into an issue with HBase 0.94 on Hadoop 2 with SSR turned on: the 
> memory usage of the HBase process grows to 7g on an -Xmx3g heap, and after 
> some time this causes OOM for the RSs. 
> Upon further investigation, I found that we end up with 200 regions, each 
> having 3-4 store files open. Under Hadoop 2 SSR, BlockReaderLocal allocates 
> DirectBuffers, unlike HDFS 1 where there is no direct buffer allocation. 
> It seems that there are no guards against the memory used by local buffers 
> in HDFS 2, and having a large number of open files causes multiple GB of 
> memory to be consumed by the RS process. 
> This issue is to investigate further what is going on: whether we can limit 
> the memory usage in HDFS or HBase, and/or document the setup. 
> Possible mitigation scenarios are: 
>  - Turn off SSR for Hadoop 2
>  - Ensure that there is enough unallocated memory for the RS based on 
> expected # of store files
>  - Ensure that there is lower number of regions per region server (hence 
> number of open files)
> Stack trace:
> {code}
> org.apache.hadoop.hbase.DroppedSnapshotException: region: 
> IntegrationTestLoadAndVerify,yC^P\xD7\x945\xD4,1363388517630.24655343d8d356ef708732f34cfe8946.
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1560)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1439)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1380)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:449)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushOneForGlobalPressure(MemStoreFlusher.java:215)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$500(MemStoreFlusher.java:63)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:237)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:632)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:97)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
> at 
> org.apache.hadoop.hdfs.util.DirectBufferPool.getBuffer(DirectBufferPool.java:70)
> at 
> org.apache.hadoop.hdfs.BlockReaderLocal.<init>(BlockReaderLocal.java:315)
> at 
> org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:208)
> at 
> org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689)
> at java.io.DataInputStream.readFully(DataInputStream.java:178)
> at 
> org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:312)
> at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:543)
> at 
> org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:589)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1261)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:512)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:603)
> at 
> org.apache.hadoop.hbase.regionserver.Store.validateStoreFile(Store.java:1568)
> at 
> org.apache.hadoop.hbase.regionserver.Store.commitFile(Store.java:845)
> at 
> org.apache.hadoop.hbase.regionserver.Store.access$500(Store.java:109)
> at 
> org.apache.hadoop.hbase.regionserver.Store$StoreFlusherImpl.commit(Store.java:2209)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1541)
> {code}
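A back-of-envelope calculation makes the scale of the problem concrete. Every figure below (buffers per reader, bytes per buffer) is an illustrative assumption, not a measured value from the report.

```java
// Rough estimate of off-heap direct-buffer memory under Hadoop 2 SSR.
// All inputs are illustrative assumptions, not measured values.
public class DirectBufferEstimate {
    static long estimateBytes(long regions, long storeFilesPerRegion,
                              long buffersPerReader, long bytesPerBuffer) {
        return regions * storeFilesPerRegion * buffersPerReader * bytesPerBuffer;
    }

    public static void main(String[] args) {
        // 200 regions x 4 store files, assuming 2 direct buffers per reader
        // and ~4 MB per buffer: already multiple GB outside the heap.
        long bytes = estimateBytes(200, 4, 2, 4L << 20);
        System.out.println(bytes / (1L << 30) + " GB"); // 6 GB
    }
}
```

With an -Xmx3g heap on top of that, the process footprint easily exceeds what the report describes, without any leak in the heap itself.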


[jira] [Commented] (HBASE-3787) Increment is non-idempotent but client retries RPC

2013-04-11 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629812#comment-13629812
 ] 

stack commented on HBASE-3787:
--

Patch looking good.

Add a comment that says what the result looks like when generateClientId 
happens.

Why not just a UUID rather than all these gyrations?  Or do you want to make it 
so that looking at id, you can tell what client it came from?  It looks like 
you throw away all this info when you create the SecureRandom?  Creating a 
SecureRandom for this one time use is expensive.

Client id should be a long, since it is uint64 in the proto?

Does ClientNonceManager have to be in top-level?  Can it not be in client 
package and be made package private?

Does it make sense putting clientid together w/ nonce making?  Could you have a 
class that does noncemaking and then another to hold the clientid?  Is clientid 
tied to Connection?  Can you get connectionid?  Or make a connectionid?  
Connections are keyed by Configuration already?  Would the Connection key do as 
a clientid?

Would it be easier or make it so you could shut down access on 
ClientNonceManager by passing in the id only rather than the whole nonce when 
you do this:

   MutateRequest request = RequestConverter.buildMutateRequest(
-location.getRegionInfo().getRegionName(), append);
+location.getRegionInfo().getRegionName(), append, clientId, 
nonce);


So, you decided to not pass nonce in here:

+r = region.append(append, append.getWriteToWAL()/*, clientId2, 
nonce*/);

I like the way this works over on the server side.

You dup code in append and increment.

Good stuff Sergey.
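The UUID-based alternative suggested above could look roughly like this: generate one 64-bit client id per process (UUID.randomUUID() is itself backed by SecureRandom, so the expensive construction happens once), then draw cheap per-operation nonces. Class and method names are hypothetical, not from the patch under review.

```java
import java.util.UUID;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch of the scheme discussed above: a single 64-bit client
// id generated once per client, plus a cheap fresh nonce per operation.
public class NonceSketch {
    // Generated once per process; avoids constructing SecureRandom per call.
    private static final long CLIENT_ID = UUID.randomUUID().getMostSignificantBits();

    public static long clientId() {
        return CLIENT_ID;
    }

    /** A fresh nonce per operation; uniqueness only needs to hold per client id. */
    public static long newNonce() {
        return ThreadLocalRandom.current().nextLong();
    }

    public static void main(String[] args) {
        System.out.println(clientId() + " / " + newNonce());
    }
}
```

The trade-off is exactly the one raised above: a random id carries no information about which client it came from, so debuggability argues for deriving the id from something connection-scoped instead.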

> Increment is non-idempotent but client retries RPC
> --
>
> Key: HBASE-3787
> URL: https://issues.apache.org/jira/browse/HBASE-3787
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.94.4, 0.95.2
>Reporter: dhruba borthakur
>Assignee: Sergey Shelukhin
>Priority: Critical
> Fix For: 0.95.1
>
> Attachments: HBASE-3787-partial.patch
>
>
> The HTable.increment() operation is non-idempotent. The client retries the 
> increment RPC a few times (as specified by configuration) before throwing an 
> error to the application. This makes it possible that the same increment call 
> be applied twice at the server.
> For increment operations, is it better to use 
> HConnectionManager.getRegionServerWithoutRetries()? Another  option would be 
> to enhance the IPC module to make the RPC server correctly identify if the 
> RPC is a retry attempt and handle accordingly.



[jira] [Updated] (HBASE-7801) Allow a deferred sync option per Mutation.

2013-04-11 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-7801:
-

Attachment: 7801-0.94-v4.txt

New API for 0.94.
Caveats:
* In 0.94 we still have HRegion.put(Put), which does not honor deferred sync; I 
did not fix that.
* Because of that, checkAndPut does not honor deferred sync either.

These are existing problems.
This should be wire and binary compatible. Please have a close look.

> Allow a deferred sync option per Mutation.
> --
>
> Key: HBASE-7801
> URL: https://issues.apache.org/jira/browse/HBASE-7801
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.94.6, 0.95.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.98.0, 0.94.7, 0.95.1
>
> Attachments: 7801-0.94-v1.txt, 7801-0.94-v2.txt, 7801-0.94-v3.txt, 
> 7801-0.94-v4.txt, 7801-0.96-full-v2.txt, 7801-0.96-full-v3.txt, 
> 7801-0.96-full-v4.txt, 7801-0.96-full-v5.txt, 7801-0.96-v10.txt, 
> 7801-0.96-v1.txt, 7801-0.96-v6.txt, 7801-0.96-v7.txt, 7801-0.96-v8.txt, 
> 7801-0.96-v9.txt
>
>
> Won't have time for parent. But a deferred sync option on a per operation 
> basis comes up quite frequently.
> In 0.96 this can be handled cleanly via protobufs and 0.94 we can have a 
> special mutation attribute.
> For batch operation we'd take the safest sync option of any of the mutations. 
> I.e. if there is at least one that wants to be flushed we'd sync the batch, 
> if there's none of those but at least one that wants deferred flush we defer 
> flush the batch, etc.
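The "safest sync option of any of the mutations" rule described above can be sketched as taking the strictest level requested in the batch. The enum and method names here are illustrative, not the actual 0.94/0.96 API.

```java
// Sketch of the batch rule above: a batch takes the strictest durability
// requested by any of its mutations. Names are illustrative only.
public class BatchDurability {
    enum Sync { SKIP_WAL, DEFERRED_FLUSH, SYNC } // ordered weakest to strictest

    static Sync forBatch(Sync... mutations) {
        Sync strictest = Sync.SKIP_WAL;
        for (Sync s : mutations) {
            if (s.ordinal() > strictest.ordinal()) strictest = s;
        }
        return strictest;
    }

    public static void main(String[] args) {
        // One mutation wants a full sync, so the whole batch is synced.
        System.out.println(forBatch(Sync.SKIP_WAL, Sync.DEFERRED_FLUSH, Sync.SYNC));
    }
}
```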



[jira] [Commented] (HBASE-8303) Increase the test timeout to 60s when they are less than 20s

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629808#comment-13629808
 ] 

Hudson commented on HBASE-8303:
---

Integrated in HBase-0.94 #958 (See 
[https://builds.apache.org/job/HBase-0.94/958/])
HBASE-8303. Increase the test timeout to 60s when they are less than 20s 
(Revision 1467157)

 Result = SUCCESS
apurtell : 
Files : 
* /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotFromAdmin.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotsFromAdmin.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/constraint/TestConstraint.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/errorhandling/TestTimeoutExceptionInjector.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/ipc/TestPBOnWritableRpc.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManager.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestSnapshotFromMaster.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/master/handler/TestCreateTableHandler.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedure.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureCoordinator.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureMember.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/procedure/TestZKProcedureControllers.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/snapshot/TestFlushSnapshotFromClient.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/thrift/TestCallQueue.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/util/TestThreads.java


> Increase the test timeout to 60s when they are less than 20s
> ---
>
> Key: HBASE-8303
> URL: https://issues.apache.org/jira/browse/HBASE-8303
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.7, 0.95.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.94.7, 0.95.1
>
> Attachments: 8303-0.94.patch, 8303.v1.patch, 8303.v1.patch
>
>
> Short test timeouts are dangerous because:
>  - if the test is executed in the same jvm as another, GC and thread 
> priority can play a role
>  - we don't know the machine used to execute the tests, nor what's running 
> on it.
> For this reason, a test timeout of 60s allows us to be on the safe side.
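As a generic illustration of the kind of bound being relaxed, a task can be run under a wall-clock limit with plain JDK facilities; this is unrelated to the JUnit annotations the patch actually edits.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Generic illustration of a wall-clock bound on a task: the same task can
// take very different time under load, which is why a tight limit is fragile.
public class TimeoutSketch {
    static <T> T runWithTimeout(Callable<T> task, long timeoutMillis) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            return pool.submit(task).get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (Exception e) {
            throw new RuntimeException("task did not finish in " + timeoutMillis + " ms", e);
        } finally {
            pool.shutdownNow(); // stop the worker thread either way
        }
    }

    public static void main(String[] args) {
        System.out.println(runWithTimeout(() -> 42, 60_000L));
    }
}
```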



[jira] [Updated] (HBASE-7801) Allow a deferred sync option per Mutation.

2013-04-11 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-7801:
-

Status: Open  (was: Patch Available)

> Allow a deferred sync option per Mutation.
> --
>
> Key: HBASE-7801
> URL: https://issues.apache.org/jira/browse/HBASE-7801
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.95.0, 0.94.6
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.98.0, 0.94.7, 0.95.1
>
> Attachments: 7801-0.94-v1.txt, 7801-0.94-v2.txt, 7801-0.94-v3.txt, 
> 7801-0.96-full-v2.txt, 7801-0.96-full-v3.txt, 7801-0.96-full-v4.txt, 
> 7801-0.96-full-v5.txt, 7801-0.96-v10.txt, 7801-0.96-v1.txt, 7801-0.96-v6.txt, 
> 7801-0.96-v7.txt, 7801-0.96-v8.txt, 7801-0.96-v9.txt
>
>
> Won't have time for parent. But a deferred sync option on a per operation 
> basis comes up quite frequently.
> In 0.96 this can be handled cleanly via protobufs and 0.94 we can have a 
> special mutation attribute.
> For batch operation we'd take the safest sync option of any of the mutations. 
> I.e. if there is at least one that wants to be flushed we'd sync the batch, 
> if there's none of those but at least one that wants deferred flush we defer 
> flush the batch, etc.



[jira] [Commented] (HBASE-7255) KV size metric went missing from StoreScanner.

2013-04-11 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629796#comment-13629796
 ] 

stack commented on HBASE-7255:
--

The suffix is not needed, or is it added in the newStat call?

{code}
-regionGetKey = regionNamePrefix + MetricsRegionServerSource.GET_KEY + 
suffix;
-regionGet = registry.getLongCounter(regionGetKey, 0l);

+regionGetKey = regionNamePrefix + MetricsRegionServerSource.GET_KEY;
+regionGet = registry.newStat(regionGetKey, "", OPS_SAMPLE_NAME, 
SIZE_VALUE_NAME);
{code}

Otherwise looks good to me Mr. Elliott.

> KV size metric went missing from StoreScanner.
> --
>
> Key: HBASE-7255
> URL: https://issues.apache.org/jira/browse/HBASE-7255
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.95.1
>
> Attachments: HBASE-7255-0.patch, HBASE-7255-1.patch, 
> HBASE-7255-2.patch, HBASE-7255-3.patch, HBASE-7255-4.patch
>
>
> In trunk due to the metric refactor, at least the KV size metric went missing.
> See this code in StoreScanner.java:
> {code}
> } finally {
>   if (cumulativeMetric > 0 && metric != null) {
>   }
> }
> {code}
> Just an empty if statement, where the metric used to be collected.
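Restoring the lost collection point could look roughly like the sketch below, where the bytes accumulated during the scan are published once in the finally block. The AtomicLong stands in for the real metrics registry and the names are illustrative.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of re-adding the missing metric: accumulate KV bytes during the
// scan and publish them once when the scan finishes. The AtomicLong is a
// stand-in for the real metrics API, not the actual registry.
public class KvSizeMetric {
    static final AtomicLong KV_BYTES_READ = new AtomicLong();

    /** Called from the finally block that is currently empty. */
    static void onScanDone(long cumulativeMetric) {
        if (cumulativeMetric > 0) {
            KV_BYTES_READ.addAndGet(cumulativeMetric);
        }
    }

    public static void main(String[] args) {
        onScanDone(4_096L);
        System.out.println(KV_BYTES_READ.get());
    }
}
```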



[jira] [Commented] (HBASE-3787) Increment is non-idempotent but client retries RPC

2013-04-11 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629790#comment-13629790
 ] 

ramkrishna.s.vasudevan commented on HBASE-3787:
---

I think if we use WALEdit here, we should store the nonce and also the 
incremented value.
If the RS goes down after adding this WALEdit, we should be able to just replay 
the edit and use the value from this edit. And I think before the client 
retries using the same nonce, we should check whether the RS to which the 
previous nonce was issued went down.

If that is the case, the client's retry can be ignored.
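One way to make the retry safe on the server side, along the lines discussed above, is a result cache keyed by (client id, nonce), so a retried increment returns the previously applied result instead of being applied twice. Everything below is a hypothetical sketch; eviction and WAL replay are omitted.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical server-side dedup for the retry scenario above: remember the
// result of each (clientId, nonce) pair so a retried increment returns the
// prior value rather than being applied a second time.
public class NonceCache {
    private final Map<String, Long> applied = new ConcurrentHashMap<>();

    /** Applies the increment once; a retry with the same nonce gets the cached result. */
    public long incrementOnce(long clientId, long nonce, long current, long delta) {
        return applied.computeIfAbsent(clientId + ":" + nonce, k -> current + delta);
    }

    public static void main(String[] args) {
        NonceCache cache = new NonceCache();
        long first = cache.incrementOnce(1L, 99L, 10L, 5L); // applied once
        long retry = cache.incrementOnce(1L, 99L, 15L, 5L); // duplicate nonce
        System.out.println(first + " " + retry); // 15 15
    }
}
```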



> Increment is non-idempotent but client retries RPC
> --
>
> Key: HBASE-3787
> URL: https://issues.apache.org/jira/browse/HBASE-3787
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.94.4, 0.95.2
>Reporter: dhruba borthakur
>Assignee: Sergey Shelukhin
>Priority: Critical
> Fix For: 0.95.1
>
> Attachments: HBASE-3787-partial.patch
>
>
> The HTable.increment() operation is non-idempotent. The client retries the 
> increment RPC a few times (as specified by configuration) before throwing an 
> error to the application. This makes it possible that the same increment call 
> be applied twice at the server.
> For increment operations, is it better to use 
> HConnectionManager.getRegionServerWithoutRetries()? Another  option would be 
> to enhance the IPC module to make the RPC server correctly identify if the 
> RPC is a retry attempt and handle accordingly.



[jira] [Commented] (HBASE-7255) KV size metric went missing from StoreScanner.

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629785#comment-13629785
 ] 

Hadoop QA commented on HBASE-7255:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12578349/HBASE-7255-4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5282//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5282//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5282//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5282//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5282//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5282//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5282//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5282//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5282//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5282//console

This message is automatically generated.

> KV size metric went missing from StoreScanner.
> --
>
> Key: HBASE-7255
> URL: https://issues.apache.org/jira/browse/HBASE-7255
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.95.1
>
> Attachments: HBASE-7255-0.patch, HBASE-7255-1.patch, 
> HBASE-7255-2.patch, HBASE-7255-3.patch, HBASE-7255-4.patch
>
>
> In trunk due to the metric refactor, at least the KV size metric went missing.
> See this code in StoreScanner.java:
> {code}
> } finally {
>   if (cumulativeMetric > 0 && metric != null) {
>   }
> }
> {code}
> Just an empty if statement, where the metric used to be collected.



[jira] [Commented] (HBASE-8306) Enhance TestJoinedScanners with ability to simulate more scenarios

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629778#comment-13629778
 ] 

Hadoop QA commented on HBASE-8306:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12578346/8306-v4.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5281//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5281//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5281//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5281//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5281//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5281//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5281//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5281//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5281//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5281//console

This message is automatically generated.

> Enhance TestJoinedScanners with ability to simulate more scenarios
> --
>
> Key: HBASE-8306
> URL: https://issues.apache.org/jira/browse/HBASE-8306
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 8306-v1.txt, 8306-v4.txt
>
>
> Currently TestJoinedScanners uses fixed lengths of values for essential and 
> non-essential column families.
> The selection rate of SingleColumnValueFilter is fixed and distribution of 
> selected rows forms stripes.
> TestJoinedScanners can be enhanced in the following ways:
> 1. main() can be introduced so that the test can be run standalone
> 2. selection ratio can be specified by user
> 3. distribution of selected rows should be random
> 4. user should be able to specify data block encoding for the column families



[jira] [Commented] (HBASE-8313) Add Bloom filter testing for HFileOutputFormat

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629775#comment-13629775
 ] 

Hudson commented on HBASE-8313:
---

Integrated in HBase-0.94-security #134 (See 
[https://builds.apache.org/job/HBase-0.94-security/134/])
HBASE-8313 Add Bloom filter testing for HFileOutputFormat (Revision 1466418)

 Result = FAILURE
mbertozzi : 
Files : 
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java


> Add Bloom filter testing for HFileOutputFormat
> --
>
> Key: HBASE-8313
> URL: https://issues.apache.org/jira/browse/HBASE-8313
> Project: HBase
>  Issue Type: Bug
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 0.94.7, 0.95.1
>
> Attachments: HBASE-8313-94.patch, HBASE-8313-v0.patch
>
>
> HBASE-3776 added Bloom Filter Support to the HFileOutputFormat, but there's 
> no test to verify that.



[jira] [Commented] (HBASE-7507) Make memstore flush be able to retry after exception

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629774#comment-13629774
 ] 

Hudson commented on HBASE-7507:
---

Integrated in HBase-0.94-security #134 (See 
[https://builds.apache.org/job/HBase-0.94-security/134/])
HBASE-7929 Reapply hbase-7507 'Make memstore flush be able to retry after 
exception' to 0.94 branch. (Original patch by chunhui shen) (Revision 1467121)

 Result = FAILURE

> Make memstore flush be able to retry after exception
> 
>
> Key: HBASE-7507
> URL: https://issues.apache.org/jira/browse/HBASE-7507
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.6, 0.95.0
>Reporter: chunhui shen
>Assignee: chunhui shen
>Priority: Critical
> Fix For: 0.94.6, 0.95.0
>
> Attachments: 7507-94.patch, 7507-trunk v1.patch, 7507-trunk v2.patch, 
> 7507-trunkv3.patch
>
>
> We will abort the regionserver if a memstore flush throws an exception.
> I think we could retry to make the regionserver more stable, because the 
> file system may be unavailable for a transient period, e.g. when switching 
> namenodes in a NameNode HA environment.
> {code}
> HRegion#internalFlushcache(){
> ...
> try {
> ...
> }catch(Throwable t){
> DroppedSnapshotException dse = new DroppedSnapshotException("region: " +
>   Bytes.toStringBinary(getRegionName()));
> dse.initCause(t);
> throw dse;
> }
> ...
> }
> MemStoreFlusher#flushRegion(){
> ...
> try {
>   region.flushcache();
> } catch (DroppedSnapshotException ex) {
>   server.abort("Replay of HLog required. Forcing server shutdown", ex);
> }
> ...
> }
> {code}
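The retry idea above can be sketched as a bounded loop with backoff before giving up (at which point the caller would abort the RS as it does today). The Flush interface below stands in for region.flushcache() and is not the real API.

```java
// Sketch of the retry idea above: attempt the flush a bounded number of
// times with growing backoff; only give up (and let the caller abort the
// regionserver) after the last attempt fails.
public class FlushRetry {
    interface Flush { void run() throws Exception; }

    static boolean flushWithRetries(Flush flush, int maxAttempts, long backoffMillis) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                flush.run();
                return true;                  // flush succeeded
            } catch (Exception e) {
                if (attempt == maxAttempts) {
                    return false;             // exhausted: caller aborts the RS
                }
                try {
                    Thread.sleep(backoffMillis * attempt); // transient FS outage may clear
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        boolean ok = flushWithRetries(() -> { }, 3, 100L);
        System.out.println(ok ? "flushed" : "abort");
    }
}
```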



[jira] [Commented] (HBASE-7929) Reapply hbase-7507 "Make memstore flush be able to retry after exception" to 0.94 branch.

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629773#comment-13629773
 ] 

Hudson commented on HBASE-7929:
---

Integrated in HBase-0.94-security #134 (See 
[https://builds.apache.org/job/HBase-0.94-security/134/])
HBASE-7929 Reapply hbase-7507 'Make memstore flush be able to retry after 
exception' to 0.94 branch. (Original patch by chunhui shen) (Revision 1467121)

 Result = FAILURE
larsh : 
Files : 
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/HConstants.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java


> Reapply hbase-7507 "Make memstore flush be able to retry after exception" to 
> 0.94 branch.
> -
>
> Key: HBASE-7929
> URL: https://issues.apache.org/jira/browse/HBASE-7929
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: stack
> Fix For: 0.94.7
>
>
> It was applied once then backed out because it seemed like it could be 
> partly responsible for destabilizing unit tests.  Thinking is different now.  
> Retrying application.



[jira] [Commented] (HBASE-7658) grant with an empty string as permission should throw an exception

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629772#comment-13629772
 ] 

Hudson commented on HBASE-7658:
---

Integrated in HBase-0.94-security #134 (See 
[https://builds.apache.org/job/HBase-0.94-security/134/])
HBASE-7658 grant with an empty string as permission should throw an 
exception (addendum) (Revision 1466826)
HBASE-7658 grant with an empty string as permission should throw an exception 
(Revision 1466723)

 Result = FAILURE
mbertozzi : 
Files : 
* /hbase/branches/0.94/src/main/ruby/hbase/security.rb

mbertozzi : 
Files : 
* 
/hbase/branches/0.94/security/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlLists.java
* /hbase/branches/0.94/src/main/ruby/hbase/security.rb
* /hbase/branches/0.94/src/main/ruby/shell/commands/grant.rb
* /hbase/branches/0.94/src/main/ruby/shell/commands/revoke.rb


> grant with an empty string as permission should throw an exception
> --
>
> Key: HBASE-7658
> URL: https://issues.apache.org/jira/browse/HBASE-7658
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.95.2
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Trivial
> Fix For: 0.94.7, 0.95.1
>
> Attachments: HBASE-7658-0.94.patch, HBASE-7658-v0.patch, 
> HBASE-7658-v1.patch
>
>
> If someone specifies an empty permission
> {code}grant 'user', ''{code}
> AccessControlLists.addUserPermission() outputs a log message and doesn't
> change the permission, but the user doesn't know about it.
> {code}
> if ((actions == null) || (actions.length == 0)) {
>   LOG.warn("No actions associated with user 
> '"+Bytes.toString(userPerm.getUser())+"'");
>   return;
> }
> {code}
> I think we should throw an exception instead of just logging.
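A minimal standalone sketch of the proposed behavior (a hypothetical method, not the actual AccessControlLists code): reject an empty action list with an exception instead of only logging a warning, so the caller learns the grant was a no-op.

```java
// Sketch of the proposed check: throw instead of silently logging.
// PermissionCheck is a stand-in class, not the real AccessControlLists.
class PermissionCheck {
    static void addUserPermission(String user, String[] actions) {
        if (actions == null || actions.length == 0) {
            // Previously this case was only logged and the call returned,
            // so the shell user never learned the grant did nothing.
            throw new IllegalArgumentException(
                "No actions associated with user '" + user + "'");
        }
        // ... persist the permission here ...
    }
}
```

With this change, `grant 'user', ''` in the shell would surface an error instead of succeeding silently.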



[jira] [Commented] (HBASE-7824) Improve master start up time when there is log splitting work

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629770#comment-13629770
 ] 

Hudson commented on HBASE-7824:
---

Integrated in HBase-0.94-security #134 (See 
[https://builds.apache.org/job/HBase-0.94-security/134/])
HBASE-7824 Improve master start up time when there is log splitting work 
(Jeffrey Zhong) (Revision 1466725)

 Result = FAILURE
tedyu : 
Files : 
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/catalog/CatalogTracker.java
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/handler/MetaServerShutdownHandler.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperNodeTracker.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSKilledWhenMasterInitializing.java


> Improve master start up time when there is log splitting work
> -
>
> Key: HBASE-7824
> URL: https://issues.apache.org/jira/browse/HBASE-7824
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Fix For: 0.94.7
>
> Attachments: hbase-7824.patch, hbase-7824-v10.patch, 
> hbase-7824_v2.patch, hbase-7824_v3.patch, hbase-7824-v7.patch, 
> hbase-7824-v8.patch, hbase-7824-v9.patch
>
>
> When there is log splitting work going on, master start-up waits until all
> log splitting work completes, even though the log splitting has nothing to
> do with the servers hosting the meta regions.
> This is bad behavior: a master node could be running while log splitting is
> happening, yet its start-up is blocked by the log splitting work.
> Since the master is a single point of failure of sorts, we should start it
> ASAP.



[jira] [Commented] (HBASE-8316) JoinedHeap for non essential column families should reseek instead of seek

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629771#comment-13629771
 ] 

Hudson commented on HBASE-8316:
---

Integrated in HBase-0.94-security #134 (See 
[https://builds.apache.org/job/HBase-0.94-security/134/])
HBASE-8316 JoinedHeap for non essential column families should reseek 
instead of seek (Revision 1466708)

 Result = FAILURE
larsh : 
Files : 
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> JoinedHeap for non essential column families should reseek instead of seek
> --
>
> Key: HBASE-8316
> URL: https://issues.apache.org/jira/browse/HBASE-8316
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters, Performance, regionserver
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.98.0, 0.94.7, 0.95.1
>
> Attachments: 8316-0.94.txt, 8316-trunk.txt, 8316-trunk.txt, 
> FDencode.png, noencode.png
>
>
> This was raised by the Phoenix team. During a profiling session we noticed 
> that catching the joinedHeap up to the current rows via seek causes a 
> performance regression, which makes the joinedHeap only efficient when either 
> a high or low percentage is matched by the filter.
> (A high percentage is fine because the joinedHeap does not get behind as
> often and does not need to be caught up; a low percentage is fine because
> the seek isn't happening frequently.)
> In our tests we found that the solution is quite simple: Replace seek with 
> reseek. Patch coming soon.
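The cost difference can be illustrated with a toy scanner over a sorted array (a hypothetical class, not the actual KeyValueHeap/StoreScanner API): seek restarts the search from the beginning of the data, while reseek continues forward from the current position, so catching a slightly-behind heap up costs far fewer comparisons.

```java
// Toy scanner illustrating seek vs reseek cost; hypothetical, the real
// logic lives in KeyValueHeap/StoreScanner.
class ToyScanner {
    final String[] rows;   // sorted row keys
    int pos = 0;           // current position
    int comparisons = 0;   // work counter

    ToyScanner(String[] rows) { this.rows = rows; }

    // seek: restart from the beginning of the data (what the old code did).
    int seek(String row) {
        pos = 0;
        return advanceTo(row);
    }

    // reseek: continue forward from the current position (the proposed fix).
    int reseek(String row) {
        return advanceTo(row);
    }

    private int advanceTo(String row) {
        while (pos < rows.length && rows[pos].compareTo(row) < 0) {
            pos++;
            comparisons++;
        }
        return pos; // index of the first row >= the requested row
    }
}
```

Both calls land on the same position; reseek simply skips the rescan of everything already passed, which is exactly the catch-up pattern the joinedHeap performs.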



[jira] [Commented] (HBASE-8266) Master cannot start if TableNotFoundException is thrown while partial table recovery

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629769#comment-13629769
 ] 

Hudson commented on HBASE-8266:
---

Integrated in HBase-0.94-security #134 (See 
[https://builds.apache.org/job/HBase-0.94-security/134/])
HBASE-8266-Master cannot start if TableNotFoundException is thrown while 
partial table recovery (Ram) (Revision 1466567)

 Result = FAILURE
ramkrishna : 
Files : 
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/handler/CreateTableHandler.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/handler/EnableTableHandler.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKTable.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/master/handler/TestCreateTableHandler.java


> Master cannot start if TableNotFoundException is thrown while partial table 
> recovery
> 
>
> Key: HBASE-8266
> URL: https://issues.apache.org/jira/browse/HBASE-8266
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.6, 0.95.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 0.98.0, 0.94.7, 0.95.1
>
> Attachments: HBASE-8266_0.94.patch, HBASE-8266_1.patch, 
> HBASE-8266.patch
>
>
> I was trying to create a table. The table creation failed
> {code}
> java.io.IOException: java.util.concurrent.ExecutionException: 
> java.lang.IllegalStateException: Could not instantiate a region instance.
>   at 
> org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegions(ModifyRegionUtils.java:133)
>   at 
> org.apache.hadoop.hbase.master.handler.CreateTableHandler.handleCreateHdfsRegions(CreateTableHandler.java:256)
>   at 
> org.apache.hadoop.hbase.master.handler.CreateTableHandler.handleCreateTable(CreateTableHandler.java:204)
>   at 
> org.apache.hadoop.hbase.master.handler.CreateTableHandler.process(CreateTableHandler.java:153)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> Caused by: java.util.concurrent.ExecutionException: 
> java.lang.IllegalStateException: Could not instantiate a region instance.
>   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>   at 
> org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegions(ModifyRegionUtils.java:126)
>   ... 7 more
> Caused by: java.lang.IllegalStateException: Could not instantiate a region 
> instance.
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3765)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:3870)
>   at 
> org.apache.hadoop.hbase.util.ModifyRegionUtils$1.call(ModifyRegionUtils.java:106)
>   at 
> org.apache.hadoop.hbase.util.ModifyRegionUtils$1.call(ModifyRegionUtils.java:103)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>   ... 3 more
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3762)
>   ... 11 more
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/hadoop/hbase/CompoundConfiguration$1
>   at 
> org.apache.hadoop.hbase.CompoundConfiguration.add(CompoundConfiguration.java:82)
>   at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:438)
>   at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:401)
>   ... 16 more
> {code}
> I am not sure about the above failure.  The same setup is able to create
> new tables.
> Now the table is already in the ENABLING state.  The master was restarted.
> As the table was found in the ENABLING state but not added to META, the
> EnableTableHandler
> {code}
> 20

[jira] [Commented] (HBASE-8303) Increase the test timeout to 60s when they are less than 20s

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629768#comment-13629768
 ] 

Hudson commented on HBASE-8303:
---

Integrated in HBase-0.94-security #134 (See 
[https://builds.apache.org/job/HBase-0.94-security/134/])
HBASE-8303. Increase the test timeout to 60s when they are less than 20s 
(Revision 1467157)

 Result = FAILURE
apurtell : 
Files : 
* /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotFromAdmin.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotsFromAdmin.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/constraint/TestConstraint.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/errorhandling/TestTimeoutExceptionInjector.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/ipc/TestPBOnWritableRpc.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManager.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestSnapshotFromMaster.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/master/handler/TestCreateTableHandler.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedure.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureCoordinator.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureMember.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/procedure/TestZKProcedureControllers.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/snapshot/TestFlushSnapshotFromClient.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/thrift/TestCallQueue.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/util/TestThreads.java


> Increase the test timeout to 60s when they are less than 20s
> ---
>
> Key: HBASE-8303
> URL: https://issues.apache.org/jira/browse/HBASE-8303
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.7, 0.95.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.94.7, 0.95.1
>
> Attachments: 8303-0.94.patch, 8303.v1.patch, 8303.v1.patch
>
>
> Short test timeouts are dangerous because:
>  - if the test is executed in the same JVM as another, GC and thread
> priority can play a role
>  - we don't know the machine used to execute the tests, nor what's running
> on it.
> For these reasons, a test timeout of 60s puts us on the safe side.



[jira] [Commented] (HBASE-8317) Seek returns wrong result with PREFIX_TREE Encoding

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629765#comment-13629765
 ] 

Hadoop QA commented on HBASE-8317:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12578343/hbase-trunk-8317v3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 7 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.security.access.TestAccessController

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5280//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5280//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5280//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5280//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5280//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5280//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5280//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5280//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5280//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5280//console

This message is automatically generated.

> Seek returns wrong result with PREFIX_TREE Encoding
> ---
>
> Key: HBASE-8317
> URL: https://issues.apache.org/jira/browse/HBASE-8317
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.0
>Reporter: chunhui shen
>Assignee: chunhui shen
> Attachments: HBASE-8317-v1.patch, hbase-trunk-8317.patch, 
> hbase-trunk-8317v3.patch
>
>
> TestPrefixTreeEncoding#testSeekWithFixedData from the patch can reproduce
> the bug.
> An example of the bug case:
> Suppose the following rows:
> 1.row3/c1:q1/
> 2.row3/c1:q2/
> 3.row3/c1:q3/
> 4.row4/c1:q1/
> 5.row4/c1:q2/
> After seeking to the row 'row30', the expected peek KV is row4/c1:q1/, but
> the actual one is row3/c1:q1/.
> I only fix this bug case in the patch.
> Maybe we can do more for other potential problems if anyone is familiar
> with the PREFIX_TREE code.



[jira] [Commented] (HBASE-8325) ReplicationSource read a empty HLog throws EOFException

2013-04-11 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629759#comment-13629759
 ] 

Liang Xie commented on HBASE-8325:
--

Hi [~zavakid], the current status of HBASE-7122 is still Patch Available; we
can backport it to 0.94 once it's resolved. Hope it's helpful for you :)

> ReplicationSource read a empty HLog throws EOFException
> ---
>
> Key: HBASE-8325
> URL: https://issues.apache.org/jira/browse/HBASE-8325
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.94.5
> Environment: replication enabled
>Reporter: zavakid
>Priority: Critical
>
> I'm using HBase replication in my test environment.
> When a ReplicationSource opens an empty HLog, an EOFException is thrown.
> This is because the reader can't read the SequenceFile's metadata; there's
> no data in the file at all, so it throws the EOFException.
> Should we detect the empty file and process it, like we process the
> FileNotFoundException?
> here's the code:
> {code:java}
> /**
>* Open a reader on the current path
>*
>* @param sleepMultiplier by how many times the default sleeping time is 
> augmented
>* @return true if we should continue with that file, false if we are over 
> with it
>*/
>   protected boolean openReader(int sleepMultiplier) {
> try {
>   LOG.debug("Opening log for replication " + this.currentPath.getName() +
>   " at " + this.repLogReader.getPosition());
>   try {
> this.reader = repLogReader.openReader(this.currentPath);
>   } catch (FileNotFoundException fnfe) {
> if (this.queueRecovered) {
>   // We didn't find the log in the archive directory, look if it still
>   // exists in the dead RS folder (there could be a chain of failures
>   // to look at)
>   LOG.info("NB dead servers : " + deadRegionServers.length);
>   for (int i = this.deadRegionServers.length - 1; i >= 0; i--) {
> Path deadRsDirectory =
> new Path(manager.getLogDir().getParent(), 
> this.deadRegionServers[i]);
> Path[] locs = new Path[] {
> new Path(deadRsDirectory, currentPath.getName()),
> new Path(deadRsDirectory.suffix(HLog.SPLITTING_EXT),
>   currentPath.getName()),
> };
> for (Path possibleLogLocation : locs) {
>   LOG.info("Possible location " + 
> possibleLogLocation.toUri().toString());
>   if (this.manager.getFs().exists(possibleLogLocation)) {
> // We found the right new location
> LOG.info("Log " + this.currentPath + " still exists at " +
> possibleLogLocation);
> // Breaking here will make us sleep since reader is null
> return true;
>   }
> }
>   }
>   // TODO What happens if the log was missing from every single 
> location?
>   // Although we need to check a couple of times as the log could have
>   // been moved by the master between the checks
>   // It can also happen if a recovered queue wasn't properly cleaned,
>   // such that the znode pointing to a log exists but the log was
>   // deleted a long time ago.
>   // For the moment, we'll throw the IO and processEndOfFile
>   throw new IOException("File from recovered queue is " +
>   "nowhere to be found", fnfe);
> } else {
>   // If the log was archived, continue reading from there
>   Path archivedLogLocation =
>   new Path(manager.getOldLogDir(), currentPath.getName());
>   if (this.manager.getFs().exists(archivedLogLocation)) {
> currentPath = archivedLogLocation;
> LOG.info("Log " + this.currentPath + " was moved to " +
> archivedLogLocation);
> // Open the log at the new location
> this.openReader(sleepMultiplier);
>   }
>   // TODO What happens the log is missing in both places?
> }
>   }
> } catch (IOException ioe) {
>   LOG.warn(peerClusterZnode + " Got: ", ioe);
>   this.reader = null;
>   // TODO Need a better way to determinate if a file is really gone but
>   // TODO without scanning all logs dir
>   if (sleepMultiplier == this.maxRetriesMultiplier) {
> LOG.warn("Waited too long for this file, considering dumping");
> return !processEndOfFile();
>   }
> }
> return true;
>   }
> {code}
> There's a method called {code:java}processEndOfFile(){code}.
> Should we handle this case in it?
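A standalone sketch of the suggested handling, using plain java.io rather than the actual ReplicationSource/HLog classes (EmptyLogSketch and its methods are hypothetical): an empty log has no header to read, so the EOFException raised on a zero-length file is treated as a normal end-of-file instead of an error.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

// Hypothetical stand-in for ReplicationSource.openReader handling.
class EmptyLogSketch {
    // Returns true when the "log" was handled (including the empty-file
    // case); false when the EOF was a genuine error on a non-empty file.
    static boolean openReader(byte[] logBytes) {
        try (DataInputStream in =
                 new DataInputStream(new ByteArrayInputStream(logBytes))) {
            in.readInt();   // a SequenceFile-style reader fails here on an empty file
            return true;    // header read fine; continue replicating
        } catch (EOFException eofe) {
            // Suggested change: an empty file is not an error,
            // just "nothing to ship" -- treat it as end of file.
            return logBytes.length == 0 && processEndOfFile();
        } catch (IOException ioe) {
            return false;
        }
    }

    static boolean processEndOfFile() { return true; } // stand-in for the real method
}
```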


[jira] [Commented] (HBASE-8218) pass HTable as a parameter to method of AggregationClient

2013-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629754#comment-13629754
 ] 

Ted Yu commented on HBASE-8218:
---

{code}
+  public void close() throws IOException {
+this.threadPool.shutdown();
{code}
Can you introduce a flag so that we know whether the AggregationClient has
been closed?
Do we need to wait for the shutdown to finish before returning from close()?
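One way to address both review questions, sketched with plain java.util.concurrent (PooledClient and its field names are hypothetical, not the actual patch): keep a closed flag that guards later calls, and block in close() until the pool actually terminates.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of a client owning a thread pool, illustrating
// a closed flag plus waiting for shutdown to complete in close().
class PooledClient {
    private final ExecutorService threadPool = Executors.newFixedThreadPool(2);
    private volatile boolean closed = false;   // the flag the review asks for

    void checkOpen() {
        if (closed) throw new IllegalStateException("client is closed");
    }

    public void close() {
        closed = true;
        threadPool.shutdown();
        try {
            // Wait for in-flight tasks instead of returning immediately.
            if (!threadPool.awaitTermination(10, TimeUnit.SECONDS)) {
                threadPool.shutdownNow();
            }
        } catch (InterruptedException ie) {
            threadPool.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }

    public boolean isClosed() { return closed; }
}
```

Each public method would call checkOpen() first, so a use-after-close fails fast instead of hitting a half-shut-down pool.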

> pass HTable as a parameter to method of AggregationClient
> -
>
> Key: HBASE-8218
> URL: https://issues.apache.org/jira/browse/HBASE-8218
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Coprocessors
>Affects Versions: 0.94.3
>Reporter: cuijianwei
> Attachments: HBASE-8218-0.94.3-v1.txt
>
>
> In AggregationClient, methods such as max(...) and min(...) take 'tableName'
> as a parameter; an HTable is then created inside the method and closed again
> before the method returns.
> This process may be heavy because each call must create and close an HTable.
> The situation becomes worse when only one thread accesses HBase through
> AggregationClient: the underlying HConnection of the created HTable is also
> created and then closed on every invocation, because no other HTable is
> using that HConnection. This operation is heavy. Therefore, can we add
> another group of methods that take an HTable or HTablePool as a parameter
> in AggregationClient?
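The cost argument can be seen with a tiny counting stub (ConnDemo and Conn are hypothetical stand-ins, not the HBase API): when every call opens and closes its own connection, N calls pay N expensive setups; a caller-supplied connection pays one.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical illustration of per-call vs shared connection cost.
class ConnDemo {
    static final AtomicInteger opens = new AtomicInteger();

    // Stand-in for an expensive HConnection setup/teardown.
    static class Conn implements AutoCloseable {
        Conn() { opens.incrementAndGet(); }
        long max() { return 42L; }      // pretend aggregation call
        public void close() { }
    }

    // Current style: every call builds and tears down its own connection.
    static long maxPerCall() {
        try (Conn c = new Conn()) { return c.max(); }
    }

    // Proposed style: the caller owns the connection and reuses it.
    static long maxShared(Conn c) { return c.max(); }
}
```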



[jira] [Commented] (HBASE-8218) pass HTable as a parameter to method of AggregationClient

2013-04-11 Thread cuijianwei (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629748#comment-13629748
 ] 

cuijianwei commented on HBASE-8218:
---

I added a patch which uses HConnection and ThreadPool to create the HTable in
AggregationClient. I'm not sure whether it's a reasonable way to alleviate
the problem. Consequently, I only use HConnection and ThreadPool to create
the HTable in the method {code} public <R, S> R max(final byte[] tableName,
final ColumnInterpreter<R, S> ci, final Scan scan) throws Throwable {code}.
I ran TestAggregationClient.java locally and it passes. I can provide a more
complete patch after understanding your feedback.

> pass HTable as a parameter to method of AggregationClient
> -
>
> Key: HBASE-8218
> URL: https://issues.apache.org/jira/browse/HBASE-8218
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Coprocessors
>Affects Versions: 0.94.3
>Reporter: cuijianwei
> Attachments: HBASE-8218-0.94.3-v1.txt
>
>
> In AggregationClient, methods such as max(...) and min(...) take 'tableName'
> as a parameter; an HTable is then created inside the method and closed again
> before the method returns.
> This process may be heavy because each call must create and close an HTable.
> The situation becomes worse when only one thread accesses HBase through
> AggregationClient: the underlying HConnection of the created HTable is also
> created and then closed on every invocation, because no other HTable is
> using that HConnection. This operation is heavy. Therefore, can we add
> another group of methods that take an HTable or HTablePool as a parameter
> in AggregationClient?



[jira] [Updated] (HBASE-8218) pass HTable as a parameter to method of AggregationClient

2013-04-11 Thread cuijianwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

cuijianwei updated HBASE-8218:
--

Attachment: HBASE-8218-0.94.3-v1.txt

using HConnection and ThreadPool in AggregationClient to create HTable

> pass HTable as a parameter to method of AggregationClient
> -
>
> Key: HBASE-8218
> URL: https://issues.apache.org/jira/browse/HBASE-8218
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Coprocessors
>Affects Versions: 0.94.3
>Reporter: cuijianwei
> Attachments: HBASE-8218-0.94.3-v1.txt
>
>
> In AggregationClient, methods such as max(...) and min(...) take 'tableName'
> as a parameter; an HTable is then created inside the method and closed again
> before the method returns.
> This process may be heavy because each call must create and close an HTable.
> The situation becomes worse when only one thread accesses HBase through
> AggregationClient: the underlying HConnection of the created HTable is also
> created and then closed on every invocation, because no other HTable is
> using that HConnection. This operation is heavy. Therefore, can we add
> another group of methods that take an HTable or HTablePool as a parameter
> in AggregationClient?



[jira] [Updated] (HBASE-7255) KV size metric went missing from StoreScanner.

2013-04-11 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-7255:
-

Attachment: HBASE-7255-4.patch

I had to move the key size computation into nextRaw, as some scans (the ones
that aren't meta scans) were missing metrics.

I also added a test to make sure this isn't missed again.

> KV size metric went missing from StoreScanner.
> --
>
> Key: HBASE-7255
> URL: https://issues.apache.org/jira/browse/HBASE-7255
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.95.1
>
> Attachments: HBASE-7255-0.patch, HBASE-7255-1.patch, 
> HBASE-7255-2.patch, HBASE-7255-3.patch, HBASE-7255-4.patch
>
>
> In trunk, due to the metrics refactoring, at least the KV size metric went missing.
> See this code in StoreScanner.java:
> {code}
> } finally {
>   if (cumulativeMetric > 0 && metric != null) {
>   }
> }
> {code}
> Just an empty if statement, where the metric used to be collected.
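What the empty finally block presumably should do can be sketched with a plain map in place of the real metrics classes (MetricSketch and its names are hypothetical): accumulate the KV sizes during the scan, then flush the sum into the named metric in finally.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the missing metric accumulation.
class MetricSketch {
    // Stand-in for the region server's metrics registry.
    static final Map<String, Long> metrics = new HashMap<>();

    // Simplified shape of StoreScanner.next(): sum KV sizes while scanning,
    // then record the sum in finally -- the part that went missing.
    static int next(String metric, int[] kvSizes) {
        long cumulativeMetric = 0;
        try {
            for (int size : kvSizes) {
                cumulativeMetric += size;
            }
            return kvSizes.length;
        } finally {
            if (cumulativeMetric > 0 && metric != null) {
                metrics.merge(metric, cumulativeMetric, Long::sum);
            }
        }
    }
}
```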



[jira] [Updated] (HBASE-8329) Limit compaction speed

2013-04-11 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-8329:


Component/s: Compaction
 Issue Type: Improvement  (was: Bug)

> Limit compaction speed
> --
>
> Key: HBASE-8329
> URL: https://issues.apache.org/jira/browse/HBASE-8329
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: binlijin
>
> There is no speed or resource limit for compaction. I think we should add
> this feature, especially for request bursts.
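One simple form such a limit could take (a hypothetical standalone rate limiter, not anything that exists in HBase): compute how long the compaction writer must pause after each chunk so that sustained throughput stays under a configured bytes-per-second cap.

```java
// Hypothetical byte-rate throttle for a compaction write loop.
class CompactionThrottle {
    private final long bytesPerSecond;

    CompactionThrottle(long bytesPerSecond) {
        this.bytesPerSecond = bytesPerSecond;
    }

    // Pause (in ms) owed after writing `bytes`, so the average rate stays capped.
    long pauseMillisFor(long bytes) {
        return (bytes * 1000L) / bytesPerSecond;
    }

    // Called by the compaction loop after each chunk is written.
    void throttle(long bytesWritten) throws InterruptedException {
        long pause = pauseMillisFor(bytesWritten);
        if (pause > 0) {
            Thread.sleep(pause);
        }
    }
}
```

A real implementation would also need to account for the time the write itself took, and ideally adapt the cap to current request load, but the per-chunk pause is the core idea.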



[jira] [Commented] (HBASE-6330) TestImportExport has been failing against hadoop 0.23/2.0 profile

2013-04-11 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629734#comment-13629734
 ] 

Jonathan Hsieh commented on HBASE-6330:
---

When the rest of the consistently broken fixes for the Hadoop 2 patches get
in, I'll hunt down the flaky ones. The speculative execution fix may be
related to this as well.




-- 
// Jonathan Hsieh (shay)
// Software Engineer, Cloudera
// j...@cloudera.com


> TestImportExport has been failing against hadoop 0.23/2.0 profile
> -
>
> Key: HBASE-6330
> URL: https://issues.apache.org/jira/browse/HBASE-6330
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 0.94.1, 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>  Labels: hadoop-2.0
> Fix For: 0.98.0, 0.95.1
>
> Attachments: hbase-6330-94.patch, hbase-6330-trunk.patch, 
> hbase-6330-v2.patch, hbase-6330.v4.patch
>
>
> See HBASE-5876.  I'm going to commit the v3 patches under this name since it
> has been two months (my bad) since the first half was committed and found to
> be incomplete.
> ---
> 4/9/13 Updated - this will take the patch from HBASE-8258 to fix this 
> specific problem.  The umbrella that used to be HBASE-8258 is now handled 
> with HBASE-6891.



[jira] [Commented] (HBASE-8329) Limit compaction speed

2013-04-11 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629733#comment-13629733
 ] 

Liang Xie commented on HBASE-8329:
--

Maybe it's an "improvement", not a "Bug", [~aoxiang], right?
In our internal version, we are designing a compaction load-awareness
feature which is similar to your idea.
But if you just want throttling, that seems easier to me :)

+1 on your idea; it's pretty important for production operation :)  Btw, I
made a compaction shell switch in HBASE-7875. It's somewhat helpful, though
it looks crude, since it needs somebody to fire it manually :)

> Limit compaction speed
> --
>
> Key: HBASE-8329
> URL: https://issues.apache.org/jira/browse/HBASE-8329
> Project: HBase
>  Issue Type: Bug
>Reporter: binlijin
>
> There is no speed or resource limit for compaction. I think we should add this 
> feature, especially for request bursts.
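The kind of throttle being requested can be sketched as follows. This is an illustrative sketch only; the class and method names are hypothetical, not HBase APIs:

```java
// Hypothetical byte-rate throttle: a compaction thread calls throttle(n)
// after writing n bytes, and is put to sleep once the per-second budget
// is exhausted. Names are illustrative, not actual HBase code.
public class CompactionThrottleSketch {
    private final long maxBytesPerSecond;
    private long bytesWrittenThisSecond = 0;
    private long windowStartMillis = System.currentTimeMillis();

    public CompactionThrottleSketch(long maxBytesPerSecond) {
        this.maxBytesPerSecond = maxBytesPerSecond;
    }

    /** Call after writing 'bytes'; sleeps if the one-second budget is used up. */
    public synchronized void throttle(long bytes) throws InterruptedException {
        bytesWrittenThisSecond += bytes;
        long now = System.currentTimeMillis();
        if (now - windowStartMillis >= 1000) {
            // A new one-second window has begun: reset the budget.
            windowStartMillis = now;
            bytesWrittenThisSecond = bytes;
        } else if (bytesWrittenThisSecond > maxBytesPerSecond) {
            // Budget exhausted: sleep until the current window ends.
            Thread.sleep(1000 - (now - windowStartMillis));
            windowStartMillis = System.currentTimeMillis();
            bytesWrittenThisSecond = 0;
        }
    }
}
```

A real implementation would also need to decide where the limit lives (per store, per region server) and make it configurable.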



[jira] [Comment Edited] (HBASE-7704) migration tool that checks presence of HFile V1 files

2013-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629720#comment-13629720
 ] 

Ted Yu edited comment on HBASE-7704 at 4/12/13 3:12 AM:


+1 on patch v5

  was (Author: yuzhih...@gmail.com):
Patch v4 allows specification of data block encoding for the column 
families in IntegrationTestLazyCfLoading
  
> migration tool that checks presence of HFile V1 files
> -
>
> Key: HBASE-7704
> URL: https://issues.apache.org/jira/browse/HBASE-7704
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Himanshu Vashishtha
>Priority: Blocker
> Fix For: 0.95.1
>
> Attachments: HBase-7704-v1.patch, HBase-7704-v2.patch, 
> HBase-7704-v3.patch, HBase-7704-v4.patch, HBASE-7704-v5.patch
>
>
> Below was Stack's comment from HBASE-7660:
> Regards the migration 'tool', or 'tool' to check for presence of v1 files, I 
> imagine it as an addition to the hfile tool 
> http://hbase.apache.org/book.html#hfile_tool2 The hfile tool already takes a 
> bunch of args including printing out meta. We could add an option to print 
> out version only – or return 1 if version 1 or some such – and then do a bit 
> of code to just list all hfiles and run this script against each. Could MR it 
> if too many files.



[jira] [Updated] (HBASE-8306) Enhance TestJoinedScanners with ability to simulate more scenarios

2013-04-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-8306:
--

Attachment: 8306-v4.txt

Patch v4 allows specification of data block encoding for the column families in 
IntegrationTestLazyCfLoading

> Enhance TestJoinedScanners with ability to simulate more scenarios
> --
>
> Key: HBASE-8306
> URL: https://issues.apache.org/jira/browse/HBASE-8306
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 8306-v1.txt, 8306-v4.txt
>
>
> Currently TestJoinedScanners uses fixed lengths of values for essential and 
> non-essential column families.
> The selection rate of SingleColumnValueFilter is fixed and distribution of 
> selected rows forms stripes.
> TestJoinedScanners can be enhanced in the following ways:
> 1. main() can be introduced so that the test can be run standalone
> 2. selection ratio can be specified by user
> 3. distribution of selected rows should be random
> 4. user should be able to specify data block encoding for the column families



[jira] [Commented] (HBASE-8220) can we record the count opened HTable for HTablePool

2013-04-11 Thread cuijianwei (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629725#comment-13629725
 ] 

cuijianwei commented on HBASE-8220:
---

[~Jean-Marc Spaggiari], thanks for your concern. I added corresponding unit tests 
for 'ConcurrentUsedTable' and they passed locally; could you please apply the 
patch 'HBASE-8220-0.94.3-v5.txt' and test it on your side?

> can we record the count opened HTable for HTablePool
> 
>
> Key: HBASE-8220
> URL: https://issues.apache.org/jira/browse/HBASE-8220
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 0.94.3
>Reporter: cuijianwei
> Attachments: HBASE-8220-0.94.3.txt, HBASE-8220-0.94.3.txt, 
> HBASE-8220-0.94.3.txt-v2, HBASE-8220-0.94.3-v2.txt, HBASE-8220-0.94.3-v3.txt, 
> HBASE-8220-0.94.3-v4.txt, HBASE-8220-0.94.3-v5.txt
>
>
> In HTablePool, we have a method getCurrentPoolSize(...) to get how many 
> opened HTables have been pooled. However, we don't track ConcurrentOpenedHTable, 
> meaning the count of HTables obtained from HTablePool.getTable(...) and not yet 
> returned to HTablePool by PooledTable.close(). The ConcurrentOpenedHTable count 
> may be meaningful because it indicates how many HTables the application keeps 
> open, which may help us set an appropriate MaxSize for HTablePool. 
> Therefore, we can add a ConcurrentOpenedHTable counter in HTablePool.
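The proposed counter amounts to incrementing on checkout and decrementing on return. A minimal sketch (names such as ConcurrentUsedTable are the reporter's; this is not the actual HTablePool code):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the proposed checked-out-table counter. The hook points are
// where HTablePool hands out a table and where PooledTable.close() returns
// it; the class and method names here are hypothetical.
public class PooledTableCounterSketch {
    private final AtomicInteger concurrentUsedTables = new AtomicInteger();

    /** Called where HTablePool.getTable(...) hands a table to the caller. */
    public void onCheckout() {
        concurrentUsedTables.incrementAndGet();
    }

    /** Called where PooledTable.close() returns a table to the pool. */
    public void onReturn() {
        concurrentUsedTables.decrementAndGet();
    }

    /** Tables currently out of the pool; useful for sizing MaxSize. */
    public int getConcurrentUsedTableCount() {
        return concurrentUsedTables.get();
    }
}
```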



[jira] [Updated] (HBASE-7704) migration tool that checks presence of HFile V1 files

2013-04-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7704:
--

Attachment: (was: 8306-v4.txt)

> migration tool that checks presence of HFile V1 files
> -
>
> Key: HBASE-7704
> URL: https://issues.apache.org/jira/browse/HBASE-7704
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Himanshu Vashishtha
>Priority: Blocker
> Fix For: 0.95.1
>
> Attachments: HBase-7704-v1.patch, HBase-7704-v2.patch, 
> HBase-7704-v3.patch, HBase-7704-v4.patch, HBASE-7704-v5.patch
>
>
> Below was Stack's comment from HBASE-7660:
> Regards the migration 'tool', or 'tool' to check for presence of v1 files, I 
> imagine it as an addition to the hfile tool 
> http://hbase.apache.org/book.html#hfile_tool2 The hfile tool already takes a 
> bunch of args including printing out meta. We could add an option to print 
> out version only – or return 1 if version 1 or some such – and then do a bit 
> of code to just list all hfiles and run this script against each. Could MR it 
> if too many files.



[jira] [Commented] (HBASE-7437) Improve CompactSelection

2013-04-11 Thread Hiroshi Ikeda (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629721#comment-13629721
 ] 

Hiroshi Ikeda commented on HBASE-7437:
--

Since Calendar initializes its fields lazily in the implementation, it might 
*not* be safe to get the hour without synchronization, even if we never update 
its internal time.

> Improve CompactSelection
> 
>
> Key: HBASE-7437
> URL: https://issues.apache.org/jira/browse/HBASE-7437
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Hiroshi Ikeda
>Assignee: Hiroshi Ikeda
>Priority: Minor
> Attachments: HBASE-7437.patch, HBASE-7437-V2.patch, 
> HBASE-7437-V3.patch, HBASE-7437-V4.patch
>
>
> 1. Using AtomicLong makes CompactSelection simple and improve its performance.
> 2. There are unused fields and methods.
> 3. The fields should be private.
> 4. Assertion in the method finishRequest seems wrong:
> {code}
>   public void finishRequest() {
> if (isOffPeakCompaction) {
>   long newValueToLog = -1;
>   synchronized(compactionCountLock) {
> assert !isOffPeakCompaction : "Double-counting off-peak count for 
> compaction";
> {code}
> The above assertion seems almost always false.
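A stripped-down model of the quoted logic shows why the assertion is almost always false: the assert is reached only when the very flag it negates was just observed true, so without a concurrent writer flipping the flag in between, it can never hold. (Hypothetical names; this is not the actual CompactSelection class.)

```java
// Minimal model of the quoted finishRequest() structure. assertionHolds()
// mirrors "assert !isOffPeakCompaction" guarded by "if (isOffPeakCompaction)":
// absent a concurrent flip of the flag between the check and the lock,
// the asserted condition is always false.
public class AssertionSketch {
    private static boolean isOffPeakCompaction = true;
    private static final Object compactionCountLock = new Object();

    /** Returns the value the quoted assert would test. */
    public static boolean assertionHolds() {
        if (isOffPeakCompaction) {
            synchronized (compactionCountLock) {
                return !isOffPeakCompaction; // false unless another thread flipped it
            }
        }
        return true; // assert not reached at all on this path
    }
}
```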



[jira] [Updated] (HBASE-8220) can we record the count opened HTable for HTablePool

2013-04-11 Thread cuijianwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

cuijianwei updated HBASE-8220:
--

Attachment: HBASE-8220-0.94.3-v5.txt

add unit test for concurrentUsedTable

> can we record the count opened HTable for HTablePool
> 
>
> Key: HBASE-8220
> URL: https://issues.apache.org/jira/browse/HBASE-8220
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 0.94.3
>Reporter: cuijianwei
> Attachments: HBASE-8220-0.94.3.txt, HBASE-8220-0.94.3.txt, 
> HBASE-8220-0.94.3.txt-v2, HBASE-8220-0.94.3-v2.txt, HBASE-8220-0.94.3-v3.txt, 
> HBASE-8220-0.94.3-v4.txt, HBASE-8220-0.94.3-v5.txt
>
>
> In HTablePool, we have a method getCurrentPoolSize(...) to get how many 
> opened HTables have been pooled. However, we don't track ConcurrentOpenedHTable, 
> meaning the count of HTables obtained from HTablePool.getTable(...) and not yet 
> returned to HTablePool by PooledTable.close(). The ConcurrentOpenedHTable count 
> may be meaningful because it indicates how many HTables the application keeps 
> open, which may help us set an appropriate MaxSize for HTablePool. 
> Therefore, we can add a ConcurrentOpenedHTable counter in HTablePool.



[jira] [Updated] (HBASE-7704) migration tool that checks presence of HFile V1 files

2013-04-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7704:
--

Attachment: 8306-v4.txt

Patch v4 allows specification of data block encoding for the column families in 
IntegrationTestLazyCfLoading

> migration tool that checks presence of HFile V1 files
> -
>
> Key: HBASE-7704
> URL: https://issues.apache.org/jira/browse/HBASE-7704
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Himanshu Vashishtha
>Priority: Blocker
> Fix For: 0.95.1
>
> Attachments: 8306-v4.txt, HBase-7704-v1.patch, HBase-7704-v2.patch, 
> HBase-7704-v3.patch, HBase-7704-v4.patch, HBASE-7704-v5.patch
>
>
> Below was Stack's comment from HBASE-7660:
> Regards the migration 'tool', or 'tool' to check for presence of v1 files, I 
> imagine it as an addition to the hfile tool 
> http://hbase.apache.org/book.html#hfile_tool2 The hfile tool already takes a 
> bunch of args including printing out meta. We could add an option to print 
> out version only – or return 1 if version 1 or some such – and then do a bit 
> of code to just list all hfiles and run this script against each. Could MR it 
> if too many files.



[jira] [Updated] (HBASE-8317) Seek returns wrong result with PREFIX_TREE Encoding

2013-04-11 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-8317:


Attachment: hbase-trunk-8317v3.patch

Merging tests in patch v3

> Seek returns wrong result with PREFIX_TREE Encoding
> ---
>
> Key: HBASE-8317
> URL: https://issues.apache.org/jira/browse/HBASE-8317
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.0
>Reporter: chunhui shen
>Assignee: chunhui shen
> Attachments: HBASE-8317-v1.patch, hbase-trunk-8317.patch, 
> hbase-trunk-8317v3.patch
>
>
> TestPrefixTreeEncoding#testSeekWithFixedData from the patch could reproduce 
> the bug.
> An example of the bug case:
> Suppose the following rows:
> 1.row3/c1:q1/
> 2.row3/c1:q2/
> 3.row3/c1:q3/
> 4.row4/c1:q1/
> 5.row4/c1:q2/
> After seeking the row 'row30', the expected peek KV is row4/c1:q1/, but 
> actual is row3/c1:q1/.
> I just fixed this bug case in the patch. 
> Maybe we can do more for other potential problems if anyone is familiar with 
> the code of PREFIX_TREE.



[jira] [Created] (HBASE-8329) Limit compaction speed

2013-04-11 Thread binlijin (JIRA)
binlijin created HBASE-8329:
---

 Summary: Limit compaction speed
 Key: HBASE-8329
 URL: https://issues.apache.org/jira/browse/HBASE-8329
 Project: HBase
  Issue Type: Bug
Reporter: binlijin


There is no speed or resource limit for compaction. I think we should add this 
feature, especially for request bursts.



[jira] [Commented] (HBASE-7605) TestMiniClusterLoadSequential fails in trunk build on hadoop 2

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629710#comment-13629710
 ] 

Hudson commented on HBASE-7605:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #494 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/494/])
HBASE-7605 TestMiniClusterLoadSequential fails in trunk build on hadoop2 
(Revision 1467135)

 Result = FAILURE
jmhsieh : 
Files : 
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadEncoded.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadSequential.java


> TestMiniClusterLoadSequential fails in trunk build on hadoop 2
> --
>
> Key: HBASE-7605
> URL: https://issues.apache.org/jira/browse/HBASE-7605
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop2, test
>Reporter: Ted Yu
>Assignee: Jonathan Hsieh
>Priority: Critical
> Fix For: 0.98.0, 0.95.1
>
> Attachments: hbase-7605.patch
>
>
> From HBase-TRUNK-on-Hadoop-2.0.0 #354:
>   loadTest[0](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
> test timed out after 12 milliseconds
>   loadTest[1](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
> test timed out after 12 milliseconds
>   loadTest[2](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
> test timed out after 12 milliseconds
>   loadTest[3](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
> test timed out after 12 milliseconds



[jira] [Commented] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629709#comment-13629709
 ] 

Hudson commented on HBASE-1936:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #494 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/494/])
HBASE-1936 ClassLoader that loads from hdfs; useful adding filters to 
classpath without having to restart services (Revision 1467092)

 Result = FAILURE
jxiang : 
Files : 
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/trunk/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestGet.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Base64.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestBase64.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestDynamicClassLoader.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/Base64.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestBase64.java


> ClassLoader that loads from hdfs; useful adding filters to classpath without 
> having to restart services
> ---
>
> Key: HBASE-1936
> URL: https://issues.apache.org/jira/browse/HBASE-1936
> Project: HBase
>  Issue Type: New Feature
>Reporter: stack
>Assignee: Jimmy Xiang
>  Labels: noob
> Fix For: 0.98.0, 0.95.1
>
> Attachments: 0.94-1936.patch, cp_from_hdfs.patch, 
> HBASE-1936-trunk(forReview).patch, trunk-1936.patch, trunk-1936_v2.1.patch, 
> trunk-1936_v2.2.patch, trunk-1936_v2.patch, trunk-1936_v3.patch
>
>




[jira] [Commented] (HBASE-8119) Optimize StochasticLoadBalancer

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629708#comment-13629708
 ] 

Hudson commented on HBASE-8119:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #494 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/494/])
HBASE-8119 Optimize StochasticLoadBalancer (Revision 1467109)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/BalancerTestBase.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestStochasticLoadBalancer.java


> Optimize StochasticLoadBalancer
> ---
>
> Key: HBASE-8119
> URL: https://issues.apache.org/jira/browse/HBASE-8119
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 0.95.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Critical
> Fix For: 0.98.0, 0.95.1
>
> Attachments: hbase-8119_v2.patch, hbase-8119_v3.patch
>
>
> On a 5 node trunk cluster, I ran into a weird problem with 
> StochasticLoadBalancer:
> server1   Thu Mar 14 03:42:50 UTC 2013   0.0       33
> server2   Thu Mar 14 03:47:53 UTC 2013   0.0       34
> server3   Thu Mar 14 03:46:53 UTC 2013   465.0     42
> server4   Thu Mar 14 03:47:53 UTC 2013   11455.0   282
> server5   Thu Mar 14 03:47:53 UTC 2013   0.0       34
> Total: 5                                 11920.0   425
> Notice that server4 has 282 regions, while the others have much less. Plus 
> for one table with 260 regions has been super imbalanced:
> {code}
> Regions by Region Server
> Region Server Region Count
> http://server3:60030/ 10
> http://server4:60030/ 250
> {code}



[jira] [Commented] (HBASE-7912) HBase Backup/Restore Based on HBase Snapshot and FileLink

2013-04-11 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629701#comment-13629701
 ] 

Yu Li commented on HBASE-7912:
--

@Matteo
A somewhat late question on the comment "unless you need all the history put 
by put, you can take a snapshot every hour and the files not compacted are 
shared between the old one and the new one and maybe the table":

For disaster recovery, we need to export the snapshot to another cluster if we 
follow the snapshot approach, right? In that case, can files shared between local 
snapshots still be shared in the exported image on the target cluster? Or do we 
support exporting snapshots "incrementally"? IMHO, we need this "incremental" 
feature to reduce cross-cluster copy cost, for both export/backup and restore. 
What's your opinion?

> HBase Backup/Restore Based on HBase Snapshot and FileLink
> -
>
> Key: HBASE-7912
> URL: https://issues.apache.org/jira/browse/HBASE-7912
> Project: HBase
>  Issue Type: New Feature
>Reporter: Richard Ding
>Assignee: Richard Ding
>
> There have been attempts in the past to come up with a viable HBase 
> backup/restore solution (e.g., HBASE-4618).  Recently, there are many 
> advancements and new features in HBase, for example, FileLink, Snapshot, and 
> Distributed Barrier Procedure. This is a proposal for a backup/restore 
> solution that utilizes these new features to achieve better performance and 
> consistency. 
>  
> A common practice of backup and restore in database is to first take full 
> baseline backup, and then periodically take incremental backup that capture 
> the changes since the full baseline backup. HBase cluster can store massive 
> amount data.  Combination of full backups with incremental backups has 
> tremendous benefit for HBase as well.  The following is a typical scenario 
> for full and incremental backup.
> # The user takes a full backup of a table or a set of tables in HBase. 
> # The user schedules periodical incremental backups to capture the changes 
> from the full backup, or from last incremental backup.
> # The user needs to restore table data to a past point of time.
> # The full backup is restored to the table(s) or to different table name(s).  
> Then the incremental backups that are up to the desired point in time are 
> applied on top of the full backup. 
> We would support the following key features and capabilities.
> * Full backup uses HBase snapshot to capture HFiles.
> * Use HBase WALs to capture incremental changes, but we use bulk load of 
> HFiles for fast incremental restore.
> * Support single table or a set of tables, and column family level backup and 
> restore.
> * Restore to different table names.
> * Support adding additional tables or CF to backup set without interruption 
> of incremental backup schedule.
> * Support rollup/combining of incremental backups into longer period and 
> bigger incremental backups.
> * Unified command line interface for all the above.
> The solution will support HBase backup to FileSystem, either on the same 
> cluster or across clusters.  It has the flexibility to support backup to 
> other devices and servers in the future.  



[jira] [Resolved] (HBASE-8303) Increse the test timeout to 60s when they are less than 20s

2013-04-11 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-8303.
---

   Resolution: Fixed
Fix Version/s: 0.94.7

Committed to 0.94.

> Increse the test timeout to 60s when they are less than 20s
> ---
>
> Key: HBASE-8303
> URL: https://issues.apache.org/jira/browse/HBASE-8303
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.7, 0.95.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.94.7, 0.95.1
>
> Attachments: 8303-0.94.patch, 8303.v1.patch, 8303.v1.patch
>
>
> Short test timeouts are dangerous because:
>  - if the test is executed in the same JVM as another, GC and thread priority 
> can play a role
>  - we don't know the machine used to execute the tests, nor what's running on 
> it.
> For these reasons, a test timeout of 60s allows us to be on the safe side.



[jira] [Commented] (HBASE-7437) Improve CompactSelection

2013-04-11 Thread Hiroshi Ikeda (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629697#comment-13629697
 ] 

Hiroshi Ikeda commented on HBASE-7437:
--

An instance of Calendar automatically updates its internal time only just after 
its creation.

That means, without explicitly updating the internal time, a static Calendar 
field always returns the hour at which the enclosing class was loaded.

Moreover, since Calendar is not thread safe and the static field is intended to 
be called by multiple threads, some synchronization is required while updating the 
internal time and getting the corresponding hour. (I think it is safe to get 
the hour without synchronization if we never update the time, even though that is 
useless.)

Contention between threads causes context switches, which are quite a large 
overhead that we should give priority to removing, and the simplest way to 
remove that worry is creating an instance for each call. But, as you mentioned, 
the implementation logic of GregorianCalendar is complex, and it is possible 
that we can ignore neither the overhead of creating an instance nor that of 
updating its time.

For this reason, I created the independent class CurrentHourProvider in order 
to encapsulate its implementation details, and tried to reduce the above 
overheads without blocking threads. Of course it is overkill if the current 
hour is requested infrequently enough that creating an instance of Calendar 
for each call is acceptable.
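The idea can be sketched as follows: cache the current hour in an immutable snapshot published through a volatile field, and rebuild a fresh Calendar only when the snapshot expires, so no thread ever shares or locks a mutable Calendar. This is a sketch of the approach, not the CurrentHourProvider class from the patch:

```java
import java.util.Calendar;

// Sketch: lock-free cached "current hour". Each snapshot is immutable,
// and a new Calendar is created only when the hour rolls over.
public class CurrentHourProviderSketch {
    // Immutable snapshot, safe to publish through a volatile reference.
    private static final class Tick {
        final int hour;
        final long expireAtMillis; // start of the next hour
        Tick(int hour, long expireAtMillis) {
            this.hour = hour;
            this.expireAtMillis = expireAtMillis;
        }
    }

    private static volatile Tick tick = nextTick();

    private static Tick nextTick() {
        // Fresh Calendar per rebuild: no shared mutable state to synchronize.
        Calendar cal = Calendar.getInstance();
        int hour = cal.get(Calendar.HOUR_OF_DAY);
        cal.set(Calendar.MINUTE, 0);
        cal.set(Calendar.SECOND, 0);
        cal.set(Calendar.MILLISECOND, 0);
        cal.add(Calendar.HOUR_OF_DAY, 1);
        return new Tick(hour, cal.getTimeInMillis());
    }

    /** Lock-free fast path; at worst a few threads race to rebuild the snapshot. */
    public static int getCurrentHour() {
        Tick t = tick;
        if (System.currentTimeMillis() < t.expireAtMillis) {
            return t.hour;
        }
        Tick fresh = nextTick();
        tick = fresh; // benign race: losers overwrite with an equivalent snapshot
        return fresh.hour;
    }
}
```

The race on the rollover is benign because every competing thread builds an equivalent snapshot, which is the trade-off that avoids blocking.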



> Improve CompactSelection
> 
>
> Key: HBASE-7437
> URL: https://issues.apache.org/jira/browse/HBASE-7437
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Hiroshi Ikeda
>Assignee: Hiroshi Ikeda
>Priority: Minor
> Attachments: HBASE-7437.patch, HBASE-7437-V2.patch, 
> HBASE-7437-V3.patch, HBASE-7437-V4.patch
>
>
> 1. Using AtomicLong makes CompactSelection simple and improve its performance.
> 2. There are unused fields and methods.
> 3. The fields should be private.
> 4. Assertion in the method finishRequest seems wrong:
> {code}
>   public void finishRequest() {
> if (isOffPeakCompaction) {
>   long newValueToLog = -1;
>   synchronized(compactionCountLock) {
> assert !isOffPeakCompaction : "Double-counting off-peak count for 
> compaction";
> {code}
> The above assertion seems almost always false.



[jira] [Commented] (HBASE-8325) ReplicationSource read a empty HLog throws EOFException

2013-04-11 Thread zavakid (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629685#comment-13629685
 ] 

zavakid commented on HBASE-8325:


It seems fixed in 0.95.1. Do we have any plan to patch it in 0.94?

> ReplicationSource read a empty HLog throws EOFException
> ---
>
> Key: HBASE-8325
> URL: https://issues.apache.org/jira/browse/HBASE-8325
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.94.5
> Environment: replication enabled
>Reporter: zavakid
>Priority: Critical
>
> I'm using the replication feature of HBase in my test environment.
> When a ReplicationSource opens an empty HLog, an EOFException is thrown. 
> This is because the Reader can't read the SequenceFile's metadata; there's 
> no data at all, so it throws the EOFException.
> Should we detect the empty file and process it, like we process the 
> FileNotFoundException?
> here's the code:
> {code:java}
> /**
>* Open a reader on the current path
>*
>* @param sleepMultiplier by how many times the default sleeping time is 
> augmented
>* @return true if we should continue with that file, false if we are over 
> with it
>*/
>   protected boolean openReader(int sleepMultiplier) {
> try {
>   LOG.debug("Opening log for replication " + this.currentPath.getName() +
>   " at " + this.repLogReader.getPosition());
>   try {
> this.reader = repLogReader.openReader(this.currentPath);
>   } catch (FileNotFoundException fnfe) {
> if (this.queueRecovered) {
>   // We didn't find the log in the archive directory, look if it still
>   // exists in the dead RS folder (there could be a chain of failures
>   // to look at)
>   LOG.info("NB dead servers : " + deadRegionServers.length);
>   for (int i = this.deadRegionServers.length - 1; i >= 0; i--) {
> Path deadRsDirectory =
> new Path(manager.getLogDir().getParent(), 
> this.deadRegionServers[i]);
> Path[] locs = new Path[] {
> new Path(deadRsDirectory, currentPath.getName()),
> new Path(deadRsDirectory.suffix(HLog.SPLITTING_EXT),
>   currentPath.getName()),
> };
> for (Path possibleLogLocation : locs) {
>   LOG.info("Possible location " + 
> possibleLogLocation.toUri().toString());
>   if (this.manager.getFs().exists(possibleLogLocation)) {
> // We found the right new location
> LOG.info("Log " + this.currentPath + " still exists at " +
> possibleLogLocation);
> // Breaking here will make us sleep since reader is null
> return true;
>   }
> }
>   }
>   // TODO What happens if the log was missing from every single 
> location?
>   // Although we need to check a couple of times as the log could have
>   // been moved by the master between the checks
>   // It can also happen if a recovered queue wasn't properly cleaned,
>   // such that the znode pointing to a log exists but the log was
>   // deleted a long time ago.
>   // For the moment, we'll throw the IO and processEndOfFile
>   throw new IOException("File from recovered queue is " +
>   "nowhere to be found", fnfe);
> } else {
>   // If the log was archived, continue reading from there
>   Path archivedLogLocation =
>   new Path(manager.getOldLogDir(), currentPath.getName());
>   if (this.manager.getFs().exists(archivedLogLocation)) {
> currentPath = archivedLogLocation;
> LOG.info("Log " + this.currentPath + " was moved to " +
> archivedLogLocation);
> // Open the log at the new location
> this.openReader(sleepMultiplier);
>   }
>   // TODO What happens the log is missing in both places?
> }
>   }
> } catch (IOException ioe) {
>   LOG.warn(peerClusterZnode + " Got: ", ioe);
>   this.reader = null;
>   // TODO Need a better way to determinate if a file is really gone but
>   // TODO without scanning all logs dir
>   if (sleepMultiplier == this.maxRetriesMultiplier) {
> LOG.warn("Waited too long for this file, considering dumping");
> return !processEndOfFile();
>   }
> }
> return true;
>   }
> {code}
> There's a method called {code:java}processEndOfFile(){code}
> Should we add this case to it?
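The suggested guard amounts to checking the file length before opening a reader. A minimal sketch, using plain java.io.File for illustration (the real code would ask the Hadoop FileSystem for the file status; names here are hypothetical):

```java
import java.io.File;

// Sketch of the proposed guard: treat a zero-length log like the
// FileNotFoundException case instead of letting the reader hit EOFException.
public class EmptyLogGuardSketch {
    /** Returns true if the log should be skipped (missing or empty). */
    public static boolean shouldSkip(File log) {
        if (!log.exists()) {
            return true; // same handling as FileNotFoundException today
        }
        // A zero-length HLog has no SequenceFile header, so opening a
        // reader on it can only end in EOFException; skip it up front.
        return log.length() == 0;
    }
}
```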


[jira] [Commented] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629669#comment-13629669
 ] 

Hudson commented on HBASE-1936:
---

Integrated in hbase-0.95 #141 (See 
[https://builds.apache.org/job/hbase-0.95/141/])
HBASE-1936 ClassLoader that loads from hdfs; useful adding filters to 
classpath without having to restart services (Revision 1467094)

 Result = SUCCESS
jxiang : 
Files : 
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/branches/0.95/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestGet.java
* 
/hbase/branches/0.95/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Base64.java
* 
/hbase/branches/0.95/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java
* 
/hbase/branches/0.95/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestBase64.java
* 
/hbase/branches/0.95/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestDynamicClassLoader.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/util/Base64.java
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestBase64.java


> ClassLoader that loads from hdfs; useful adding filters to classpath without 
> having to restart services
> ---
>
> Key: HBASE-1936
> URL: https://issues.apache.org/jira/browse/HBASE-1936
> Project: HBase
>  Issue Type: New Feature
>Reporter: stack
>Assignee: Jimmy Xiang
>  Labels: noob
> Fix For: 0.98.0, 0.95.1
>
> Attachments: 0.94-1936.patch, cp_from_hdfs.patch, 
> HBASE-1936-trunk(forReview).patch, trunk-1936.patch, trunk-1936_v2.1.patch, 
> trunk-1936_v2.2.patch, trunk-1936_v2.patch, trunk-1936_v3.patch
>
>




[jira] [Commented] (HBASE-7605) TestMiniClusterLoadSequential fails in trunk build on hadoop 2

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629670#comment-13629670
 ] 

Hudson commented on HBASE-7605:
---

Integrated in hbase-0.95 #141 (See 
[https://builds.apache.org/job/hbase-0.95/141/])
HBASE-7605 TestMiniClusterLoadSequential fails in trunk build on hadoop2 
(Revision 1467134)

 Result = SUCCESS
jmhsieh : 
Files : 
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadEncoded.java
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadSequential.java


> TestMiniClusterLoadSequential fails in trunk build on hadoop 2
> --
>
> Key: HBASE-7605
> URL: https://issues.apache.org/jira/browse/HBASE-7605
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop2, test
>Reporter: Ted Yu
>Assignee: Jonathan Hsieh
>Priority: Critical
> Fix For: 0.98.0, 0.95.1
>
> Attachments: hbase-7605.patch
>
>
> From HBase-TRUNK-on-Hadoop-2.0.0 #354:
>   loadTest[0](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
> test timed out after 12 milliseconds
>   loadTest[1](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
> test timed out after 12 milliseconds
>   loadTest[2](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
> test timed out after 12 milliseconds
>   loadTest[3](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
> test timed out after 12 milliseconds



[jira] [Commented] (HBASE-8119) Optimize StochasticLoadBalancer

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629668#comment-13629668
 ] 

Hudson commented on HBASE-8119:
---

Integrated in hbase-0.95 #141 (See 
[https://builds.apache.org/job/hbase-0.95/141/])
HBASE-8119 Optimize StochasticLoadBalancer (Revision 1467111)

 Result = SUCCESS
enis : 
Files : 
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/BalancerTestBase.java
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestStochasticLoadBalancer.java


> Optimize StochasticLoadBalancer
> ---
>
> Key: HBASE-8119
> URL: https://issues.apache.org/jira/browse/HBASE-8119
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 0.95.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Critical
> Fix For: 0.98.0, 0.95.1
>
> Attachments: hbase-8119_v2.patch, hbase-8119_v3.patch
>
>
> On a 5 node trunk cluster, I ran into a weird problem with 
> StochasticLoadBalancer:
> server1   Thu Mar 14 03:42:50 UTC 2013   0.0       33
> server2   Thu Mar 14 03:47:53 UTC 2013   0.0       34
> server3   Thu Mar 14 03:46:53 UTC 2013   465.0     42
> server4   Thu Mar 14 03:47:53 UTC 2013   11455.0   282
> server5   Thu Mar 14 03:47:53 UTC 2013   0.0       34
> Total: 5   11920   425
> Notice that server4 has 282 regions while the others have far fewer. Plus,
> one table with 260 regions has been super imbalanced:
> {code}
> Regions by Region Server
> Region Server Region Count
> http://server3:60030/ 10
> http://server4:60030/ 250
> {code}



[jira] [Commented] (HBASE-7704) migration tool that checks presence of HFile V1 files

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629664#comment-13629664
 ] 

Hadoop QA commented on HBASE-7704:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12578326/HBASE-7704-v5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestFullLogReconstruction

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5279//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5279//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5279//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5279//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5279//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5279//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5279//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5279//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5279//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5279//console

This message is automatically generated.

> migration tool that checks presence of HFile V1 files
> -
>
> Key: HBASE-7704
> URL: https://issues.apache.org/jira/browse/HBASE-7704
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Himanshu Vashishtha
>Priority: Blocker
> Fix For: 0.95.1
>
> Attachments: HBase-7704-v1.patch, HBase-7704-v2.patch, 
> HBase-7704-v3.patch, HBase-7704-v4.patch, HBASE-7704-v5.patch
>
>
> Below was Stack's comment from HBASE-7660:
> Regards the migration 'tool', or 'tool' to check for presence of v1 files, I 
> imagine it as an addition to the hfile tool 
> http://hbase.apache.org/book.html#hfile_tool2 The hfile tool already takes a 
> bunch of args including printing out meta. We could add an option to print 
> out version only – or return 1 if version 1 or some such – and then do a bit 
> of code to just list all hfiles and run this script against each. Could MR it 
> if too many files.
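Stack's proposal above (print just the version, return 1 for v1, then loop the check over every hfile) can be sketched in outline. Everything below is illustrative only: the class name, the directory walk, and especially the "version int in the last 4 bytes" shortcut are assumptions standing in for the real HFile trailer parsing, which should be done through the HFile tool itself.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class HFileVersionScan {
    // Illustrative assumption: a version int sits in the file's last 4 bytes.
    // Real HFiles store this inside a larger trailer structure.
    static int readVersion(Path file) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
            raf.seek(raf.length() - 4);
            return raf.readInt();
        }
    }

    // "List all hfiles and run this script against each": walk the store
    // directory and collect every file whose version field reads as 1.
    static List<Path> findV1Files(Path root) throws IOException {
        List<Path> all;
        try (Stream<Path> walk = Files.walk(root)) {
            all = walk.filter(Files::isRegularFile).collect(Collectors.toList());
        }
        List<Path> v1 = new ArrayList<>();
        for (Path p : all) {
            if (readVersion(p) == 1) {
                v1.add(p);
            }
        }
        return v1;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("hfiles");
        Path f = dir.resolve("region1.hfile");
        try (RandomAccessFile raf = new RandomAccessFile(f.toFile(), "rw")) {
            raf.writeInt(1); // pretend trailer: version 1
        }
        System.out.println(findV1Files(dir).size()); // one v1 file found
        Files.delete(f);
        Files.delete(dir);
    }
}
```

A command-line wrapper could exit with status 1 when the returned list is non-empty, matching the "return 1 if version 1" idea, and the per-file loop is the part an MR job would parallelize when there are too many files.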



[jira] [Commented] (HBASE-7704) migration tool that checks presence of HFile V1 files

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629657#comment-13629657
 ] 

Hadoop QA commented on HBASE-7704:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12578322/HBASE-7704-v5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5278//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5278//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5278//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5278//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5278//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5278//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5278//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5278//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5278//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5278//console

This message is automatically generated.

> migration tool that checks presence of HFile V1 files
> -
>
> Key: HBASE-7704
> URL: https://issues.apache.org/jira/browse/HBASE-7704
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Himanshu Vashishtha
>Priority: Blocker
> Fix For: 0.95.1
>
> Attachments: HBase-7704-v1.patch, HBase-7704-v2.patch, 
> HBase-7704-v3.patch, HBase-7704-v4.patch, HBASE-7704-v5.patch
>
>
> Below was Stack's comment from HBASE-7660:
> Regards the migration 'tool', or 'tool' to check for presence of v1 files, I 
> imagine it as an addition to the hfile tool 
> http://hbase.apache.org/book.html#hfile_tool2 The hfile tool already takes a 
> bunch of args including printing out meta. We could add an option to print 
> out version only – or return 1 if version 1 or some such – and then do a bit 
> of code to just list all hfiles and run this script against each. Could MR it 
> if too many files.



[jira] [Commented] (HBASE-7507) Make memstore flush be able to retry after exception

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629656#comment-13629656
 ] 

Hudson commented on HBASE-7507:
---

Integrated in HBase-0.94 #957 (See 
[https://builds.apache.org/job/HBase-0.94/957/])
HBASE-7929 Reapply hbase-7507 'Make memstore flush be able to retry after 
exception' to 0.94 branch. (Original patch by chunhui shen) (Revision 1467121)

 Result = SUCCESS

> Make memstore flush be able to retry after exception
> 
>
> Key: HBASE-7507
> URL: https://issues.apache.org/jira/browse/HBASE-7507
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.6, 0.95.0
>Reporter: chunhui shen
>Assignee: chunhui shen
>Priority: Critical
> Fix For: 0.94.6, 0.95.0
>
> Attachments: 7507-94.patch, 7507-trunk v1.patch, 7507-trunk v2.patch, 
> 7507-trunkv3.patch
>
>
> We will abort the regionserver if a memstore flush throws an exception.
> I think we could retry to make the regionserver more stable, because the file
> system may be unavailable only transiently, e.g. when switching namenodes in
> a NameNode HA environment.
> {code}
> HRegion#internalFlushcache(){
> ...
> try {
> ...
> }catch(Throwable t){
> DroppedSnapshotException dse = new DroppedSnapshotException("region: " +
>   Bytes.toStringBinary(getRegionName()));
> dse.initCause(t);
> throw dse;
> }
> ...
> }
> MemStoreFlusher#flushRegion(){
> ...
> region.flushcache();
> ...
>  try {
> }catch(DroppedSnapshotException ex){
> server.abort("Replay of HLog required. Forcing server shutdown", ex);
> }
> ...
> }
> {code}
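The quoted flow shows why a single failed flush aborts the regionserver: flushcache() wraps any Throwable in a DroppedSnapshotException and the flusher aborts on it. A bounded retry, as proposed, could look roughly like this minimal, self-contained sketch (the names and the linear backoff are illustrative, not the actual patch):

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public class FlushRetry {
    // Retry a flush a bounded number of times before giving up, instead of
    // treating the first IOException as fatal.
    static <T> T retry(Callable<T> flush, int maxAttempts, long pauseMs) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return flush.call();
            } catch (IOException ioe) {
                last = ioe; // transient FS error, e.g. during a NameNode failover
                Thread.sleep(pauseMs * attempt); // linear backoff between attempts
            }
        }
        throw last; // exhausted retries: surface the error (the abort path)
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Fails twice, then succeeds, emulating a transient filesystem outage.
        String result = retry(() -> {
            if (++calls[0] < 3) {
                throw new IOException("fs not ready");
            }
            return "flushed";
        }, 5, 1L);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```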



[jira] [Commented] (HBASE-7929) Reapply hbase-7507 "Make memstore flush be able to retry after exception" to 0.94 branch.

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629655#comment-13629655
 ] 

Hudson commented on HBASE-7929:
---

Integrated in HBase-0.94 #957 (See 
[https://builds.apache.org/job/HBase-0.94/957/])
HBASE-7929 Reapply hbase-7507 'Make memstore flush be able to retry after 
exception' to 0.94 branch. (Original patch by chunhui shen) (Revision 1467121)

 Result = SUCCESS
larsh : 
Files : 
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/HConstants.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java


> Reapply hbase-7507 "Make memstore flush be able to retry after exception" to 
> 0.94 branch.
> -
>
> Key: HBASE-7929
> URL: https://issues.apache.org/jira/browse/HBASE-7929
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: stack
> Fix For: 0.94.7
>
>
> It was applied once, then backed out because it seemed like it could be partly
> responsible for destabilizing unit tests. Thinking is different now.
> Retrying the application.



[jira] [Commented] (HBASE-8325) ReplicationSource read a empty HLog throws EOFException

2013-04-11 Thread Jieshan Bean (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629652#comment-13629652
 ] 

Jieshan Bean commented on HBASE-8325:
-

Agreed, the latest patch in HBASE-7122 covers this issue. I suggest resolving 
it as "duplicate".

> ReplicationSource read a empty HLog throws EOFException
> ---
>
> Key: HBASE-8325
> URL: https://issues.apache.org/jira/browse/HBASE-8325
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.94.5
> Environment: replication enabled
>Reporter: zavakid
>Priority: Critical
>
> I'm using HBase replication in my test environment.
> When a ReplicationSource opens an empty HLog, an EOFException is thrown.
> This is because the Reader can't read the SequenceFile's metadata: there is
> no data in the file at all, so it throws the EOFException.
> Should we detect the empty file and process it, the way we process the
> FileNotFoundException?
> here's the code:
> {code:java}
> /**
>* Open a reader on the current path
>*
>* @param sleepMultiplier by how many times the default sleeping time is 
> augmented
>* @return true if we should continue with that file, false if we are over 
> with it
>*/
>   protected boolean openReader(int sleepMultiplier) {
> try {
>   LOG.debug("Opening log for replication " + this.currentPath.getName() +
>   " at " + this.repLogReader.getPosition());
>   try {
> this.reader = repLogReader.openReader(this.currentPath);
>   } catch (FileNotFoundException fnfe) {
> if (this.queueRecovered) {
>   // We didn't find the log in the archive directory, look if it still
>   // exists in the dead RS folder (there could be a chain of failures
>   // to look at)
>   LOG.info("NB dead servers : " + deadRegionServers.length);
>   for (int i = this.deadRegionServers.length - 1; i >= 0; i--) {
> Path deadRsDirectory =
> new Path(manager.getLogDir().getParent(), 
> this.deadRegionServers[i]);
> Path[] locs = new Path[] {
> new Path(deadRsDirectory, currentPath.getName()),
> new Path(deadRsDirectory.suffix(HLog.SPLITTING_EXT),
>   currentPath.getName()),
> };
> for (Path possibleLogLocation : locs) {
>   LOG.info("Possible location " + 
> possibleLogLocation.toUri().toString());
>   if (this.manager.getFs().exists(possibleLogLocation)) {
> // We found the right new location
> LOG.info("Log " + this.currentPath + " still exists at " +
> possibleLogLocation);
> // Breaking here will make us sleep since reader is null
> return true;
>   }
> }
>   }
>   // TODO What happens if the log was missing from every single 
> location?
>   // Although we need to check a couple of times as the log could have
>   // been moved by the master between the checks
>   // It can also happen if a recovered queue wasn't properly cleaned,
>   // such that the znode pointing to a log exists but the log was
>   // deleted a long time ago.
>   // For the moment, we'll throw the IO and processEndOfFile
>   throw new IOException("File from recovered queue is " +
>   "nowhere to be found", fnfe);
> } else {
>   // If the log was archived, continue reading from there
>   Path archivedLogLocation =
>   new Path(manager.getOldLogDir(), currentPath.getName());
>   if (this.manager.getFs().exists(archivedLogLocation)) {
> currentPath = archivedLogLocation;
> LOG.info("Log " + this.currentPath + " was moved to " +
> archivedLogLocation);
> // Open the log at the new location
> this.openReader(sleepMultiplier);
>   }
>   // TODO What happens if the log is missing in both places?
> }
>   }
> } catch (IOException ioe) {
>   LOG.warn(peerClusterZnode + " Got: ", ioe);
>   this.reader = null;
>   // TODO Need a better way to determine if a file is really gone, but
>   // TODO without scanning the whole logs dir
>   if (sleepMultiplier == this.maxRetriesMultiplier) {
> LOG.warn("Waited too long for this file, considering dumping");
> return !processEndOfFile();
>   }
> }
> return true;
>   }
> {code}
> There's a method called {code:java}processEndOfFile(){code};
> should we add this case to it?
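As a rough illustration of the suggestion, an empty-file check before opening the reader could short-circuit to the same "done with this file" outcome that processEndOfFile() produces. This is a self-contained sketch with hypothetical names, not the actual ReplicationSource code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class EmptyLogGuard {
    // A zero-length log cannot even contain the SequenceFile header, so
    // opening a reader on it is guaranteed to hit EOF. Treat that as
    // "end of file already reached" rather than an error: skip the file
    // and move on to the next log in the queue.
    static boolean openOrSkip(Path log) throws IOException {
        if (Files.size(log) == 0) {
            return false; // nothing to replicate from this file
        }
        // ... open the reader as usual; a truncated-but-nonempty file would
        // still surface its EOFException to the existing retry logic.
        return true;
    }

    public static void main(String[] args) throws IOException {
        Path empty = Files.createTempFile("hlog-", ".tmp");
        try {
            System.out.println(openOrSkip(empty)); // false: skip the empty log
        } finally {
            Files.delete(empty);
        }
    }
}
```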


[jira] [Commented] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629649#comment-13629649
 ] 

Hudson commented on HBASE-1936:
---

Integrated in HBase-TRUNK #4054 (See 
[https://builds.apache.org/job/HBase-TRUNK/4054/])
HBASE-1936 ClassLoader that loads from hdfs; useful adding filters to 
classpath without having to restart services (Revision 1467092)

 Result = FAILURE
jxiang : 
Files : 
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/trunk/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestGet.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Base64.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestBase64.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestDynamicClassLoader.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/Base64.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestBase64.java


> ClassLoader that loads from hdfs; useful adding filters to classpath without 
> having to restart services
> ---
>
> Key: HBASE-1936
> URL: https://issues.apache.org/jira/browse/HBASE-1936
> Project: HBase
>  Issue Type: New Feature
>Reporter: stack
>Assignee: Jimmy Xiang
>  Labels: noob
> Fix For: 0.98.0, 0.95.1
>
> Attachments: 0.94-1936.patch, cp_from_hdfs.patch, 
> HBASE-1936-trunk(forReview).patch, trunk-1936.patch, trunk-1936_v2.1.patch, 
> trunk-1936_v2.2.patch, trunk-1936_v2.patch, trunk-1936_v3.patch
>
>




[jira] [Commented] (HBASE-7605) TestMiniClusterLoadSequential fails in trunk build on hadoop 2

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629650#comment-13629650
 ] 

Hudson commented on HBASE-7605:
---

Integrated in HBase-TRUNK #4054 (See 
[https://builds.apache.org/job/HBase-TRUNK/4054/])
HBASE-7605 TestMiniClusterLoadSequential fails in trunk build on hadoop2 
(Revision 1467135)

 Result = FAILURE
jmhsieh : 
Files : 
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadEncoded.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadSequential.java


> TestMiniClusterLoadSequential fails in trunk build on hadoop 2
> --
>
> Key: HBASE-7605
> URL: https://issues.apache.org/jira/browse/HBASE-7605
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop2, test
>Reporter: Ted Yu
>Assignee: Jonathan Hsieh
>Priority: Critical
> Fix For: 0.98.0, 0.95.1
>
> Attachments: hbase-7605.patch
>
>
> From HBase-TRUNK-on-Hadoop-2.0.0 #354:
>   loadTest[0](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
> test timed out after 12 milliseconds
>   loadTest[1](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
> test timed out after 12 milliseconds
>   loadTest[2](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
> test timed out after 12 milliseconds
>   loadTest[3](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
> test timed out after 12 milliseconds



[jira] [Commented] (HBASE-8119) Optimize StochasticLoadBalancer

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629648#comment-13629648
 ] 

Hudson commented on HBASE-8119:
---

Integrated in HBase-TRUNK #4054 (See 
[https://builds.apache.org/job/HBase-TRUNK/4054/])
HBASE-8119 Optimize StochasticLoadBalancer (Revision 1467109)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/BalancerTestBase.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestStochasticLoadBalancer.java


> Optimize StochasticLoadBalancer
> ---
>
> Key: HBASE-8119
> URL: https://issues.apache.org/jira/browse/HBASE-8119
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 0.95.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Critical
> Fix For: 0.98.0, 0.95.1
>
> Attachments: hbase-8119_v2.patch, hbase-8119_v3.patch
>
>
> On a 5 node trunk cluster, I ran into a weird problem with 
> StochasticLoadBalancer:
> server1   Thu Mar 14 03:42:50 UTC 2013   0.0       33
> server2   Thu Mar 14 03:47:53 UTC 2013   0.0       34
> server3   Thu Mar 14 03:46:53 UTC 2013   465.0     42
> server4   Thu Mar 14 03:47:53 UTC 2013   11455.0   282
> server5   Thu Mar 14 03:47:53 UTC 2013   0.0       34
> Total: 5   11920   425
> Notice that server4 has 282 regions while the others have far fewer. Plus,
> one table with 260 regions has been super imbalanced:
> {code}
> Regions by Region Server
> Region Server Region Count
> http://server3:60030/ 10
> http://server4:60030/ 250
> {code}



[jira] [Commented] (HBASE-7255) KV size metric went missing from StoreScanner.

2013-04-11 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629646#comment-13629646
 ] 

Elliott Clark commented on HBASE-7255:
--

I pulled the size computation out so that compactions don't pay the cost of 
computing the size at all; additionally, that cleans up the StoreScanner 
interface a lot.

> KV size metric went missing from StoreScanner.
> --
>
> Key: HBASE-7255
> URL: https://issues.apache.org/jira/browse/HBASE-7255
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.95.1
>
> Attachments: HBASE-7255-0.patch, HBASE-7255-1.patch, 
> HBASE-7255-2.patch, HBASE-7255-3.patch
>
>
> In trunk, due to the metrics refactor, at least the KV size metric went missing.
> See this code in StoreScanner.java:
> {code}
> } finally {
>   if (cumulativeMetric > 0 && metric != null) {
>   }
> }
> {code}
> Just an empty if statement, where the metric used to be collected.
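Elliott's change can be illustrated with a small sketch: only compute and record the cumulative KV size when a metric name was actually supplied, so compaction scans (which pass none) skip the cost entirely. The class and method names below are made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.LongSupplier;

public class ScanMetrics {
    final Map<String, Long> counters = new HashMap<>();

    // Record the cumulative size only when a metric was requested; callers
    // like compactions pass null and never trigger the computation.
    void maybeRecord(String metric, LongSupplier cumulativeSize) {
        if (metric != null) {
            counters.merge(metric, cumulativeSize.getAsLong(), Long::sum);
        }
    }

    public static void main(String[] args) {
        ScanMetrics m = new ScanMetrics();
        m.maybeRecord("kvSize", () -> 1024L);   // user scan: metric recorded
        m.maybeRecord(null, () -> {             // compaction: supplier never runs
            throw new AssertionError("should not compute");
        });
        System.out.println(m.counters.get("kvSize"));
    }
}
```

Passing a supplier rather than a precomputed value is what makes the compaction path free: the size is never calculated unless the metric is actually recorded.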



[jira] [Commented] (HBASE-7704) migration tool that checks presence of HFile V1 files

2013-04-11 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629639#comment-13629639
 ] 

stack commented on HBASE-7704:
--

Looks good Himanshu.  Can you paste the output this tool makes when you do -h 
and then when you actually run it?  Thanks.

> migration tool that checks presence of HFile V1 files
> -
>
> Key: HBASE-7704
> URL: https://issues.apache.org/jira/browse/HBASE-7704
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Himanshu Vashishtha
>Priority: Blocker
> Fix For: 0.95.1
>
> Attachments: HBase-7704-v1.patch, HBase-7704-v2.patch, 
> HBase-7704-v3.patch, HBase-7704-v4.patch, HBASE-7704-v5.patch
>
>
> Below was Stack's comment from HBASE-7660:
> Regards the migration 'tool', or 'tool' to check for presence of v1 files, I 
> imagine it as an addition to the hfile tool 
> http://hbase.apache.org/book.html#hfile_tool2 The hfile tool already takes a 
> bunch of args including printing out meta. We could add an option to print 
> out version only – or return 1 if version 1 or some such – and then do a bit 
> of code to just list all hfiles and run this script against each. Could MR it 
> if too many files.



[jira] [Commented] (HBASE-8324) TestHFileOutputFormat.testMRIncremental* fails against hadoop2 profile

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629629#comment-13629629
 ] 

Hadoop QA commented on HBASE-8324:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12578310/hbase-8324.hadoop2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5277//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5277//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5277//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5277//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5277//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5277//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5277//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5277//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5277//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5277//console

This message is automatically generated.

> TestHFileOutputFormat.testMRIncremental* fails against hadoop2 profile
> --
>
> Key: HBASE-8324
> URL: https://issues.apache.org/jira/browse/HBASE-8324
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop2, test
>Affects Versions: 0.95.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Fix For: 0.98.0, 0.95.1
>
> Attachments: hbase-8324.hadoop2.patch
>
>
> Two test cases are failing:
> testMRIncrementalLoad, testMRIncrementalLoadWithSplit
> {code}
> <testcase classname="org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat" name="testMRIncrementalLoad">
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.runIncrementalPELoad(TestHFileOutputFormat.java:468)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.doIncrementalLoadTest(TestHFileOutputFormat.java:378)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.testMRIncrementalLoad(TestHFileOutputFormat.java:348)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ...
> <testcase classname="org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat" name="testMRIncrementalLoadWithSplit">
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.runIncrementalPELoad(TestHFileOutputFormat.java:468)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.doIncrementalLoadTest(TestHFileOutputFormat.java:378)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.testMRIncrementalLoadWithSplit(TestHFileOutputFormat.java:354)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ...
> {code}


[jira] [Commented] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629626#comment-13629626
 ] 

Hudson commented on HBASE-1936:
---

Integrated in hbase-0.95-on-hadoop2 #66 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/66/])
HBASE-1936 ClassLoader that loads from hdfs; useful adding filters to 
classpath without having to restart services (Revision 1467094)

 Result = FAILURE
jxiang : 
Files : 
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/branches/0.95/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestGet.java
* 
/hbase/branches/0.95/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Base64.java
* 
/hbase/branches/0.95/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java
* 
/hbase/branches/0.95/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestBase64.java
* 
/hbase/branches/0.95/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestDynamicClassLoader.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/util/Base64.java
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestBase64.java


> ClassLoader that loads from hdfs; useful adding filters to classpath without 
> having to restart services
> ---
>
> Key: HBASE-1936
> URL: https://issues.apache.org/jira/browse/HBASE-1936
> Project: HBase
>  Issue Type: New Feature
>Reporter: stack
>Assignee: Jimmy Xiang
>  Labels: noob
> Fix For: 0.98.0, 0.95.1
>
> Attachments: 0.94-1936.patch, cp_from_hdfs.patch, 
> HBASE-1936-trunk(forReview).patch, trunk-1936.patch, trunk-1936_v2.1.patch, 
> trunk-1936_v2.2.patch, trunk-1936_v2.patch, trunk-1936_v3.patch
>
>




[jira] [Commented] (HBASE-7605) TestMiniClusterLoadSequential fails in trunk build on hadoop 2

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629627#comment-13629627
 ] 

Hudson commented on HBASE-7605:
---

Integrated in hbase-0.95-on-hadoop2 #66 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/66/])
HBASE-7605 TestMiniClusterLoadSequential fails in trunk build on hadoop2 
(Revision 1467134)

 Result = FAILURE
jmhsieh : 
Files : 
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadEncoded.java
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadSequential.java


> TestMiniClusterLoadSequential fails in trunk build on hadoop 2
> --
>
> Key: HBASE-7605
> URL: https://issues.apache.org/jira/browse/HBASE-7605
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop2, test
>Reporter: Ted Yu
>Assignee: Jonathan Hsieh
>Priority: Critical
> Fix For: 0.98.0, 0.95.1
>
> Attachments: hbase-7605.patch
>
>
> From HBase-TRUNK-on-Hadoop-2.0.0 #354:
>   loadTest[0](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
> test timed out after 12 milliseconds
>   loadTest[1](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
> test timed out after 12 milliseconds
>   loadTest[2](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
> test timed out after 12 milliseconds
>   loadTest[3](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
> test timed out after 12 milliseconds



[jira] [Commented] (HBASE-8119) Optimize StochasticLoadBalancer

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629625#comment-13629625
 ] 

Hudson commented on HBASE-8119:
---

Integrated in hbase-0.95-on-hadoop2 #66 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/66/])
HBASE-8119 Optimize StochasticLoadBalancer (Revision 1467111)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/BalancerTestBase.java
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestStochasticLoadBalancer.java


> Optimize StochasticLoadBalancer
> ---
>
> Key: HBASE-8119
> URL: https://issues.apache.org/jira/browse/HBASE-8119
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 0.95.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Critical
> Fix For: 0.98.0, 0.95.1
>
> Attachments: hbase-8119_v2.patch, hbase-8119_v3.patch
>
>
> On a 5 node trunk cluster, I ran into a weird problem with 
> StochasticLoadBalancer:
> server1   Thu Mar 14 03:42:50 UTC 2013    0.0       33
> server2   Thu Mar 14 03:47:53 UTC 2013    0.0       34
> server3   Thu Mar 14 03:46:53 UTC 2013    465.0     42
> server4   Thu Mar 14 03:47:53 UTC 2013    11455.0   282
> server5   Thu Mar 14 03:47:53 UTC 2013    0.0       34
> Total: 5                                  11920.0   425
> Notice that server4 has 282 regions, while the others have far fewer. Plus,
> one table with 260 regions has been super imbalanced:
> {code}
> Regions by Region Server
> Region Server Region Count
> http://server3:60030/ 10
> http://server4:60030/ 250
> {code}
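As a rough illustration of why the counts above are pathological, a normalized region-count skew (similar in spirit to, but much simpler than, the balancer's actual cost function) makes the imbalance explicit:

```java
public class RegionSkewCost {
    /**
     * Normalized skew: 0.0 for a perfectly even spread, approaching 1.0 as
     * one server holds everything. Uses mean absolute deviation scaled by
     * the worst case (all regions on a single server).
     */
    static double cost(int[] regionsPerServer) {
        int total = 0;
        for (int r : regionsPerServer) total += r;
        double mean = (double) total / regionsPerServer.length;
        double dev = 0.0;
        for (int r : regionsPerServer) dev += Math.abs(r - mean);
        // Worst case deviation: one server with `total`, the rest with zero.
        double worst = 2.0 * total * (regionsPerServer.length - 1) / regionsPerServer.length;
        return worst == 0 ? 0.0 : dev / worst;
    }

    public static void main(String[] args) {
        System.out.println(cost(new int[] {10, 250}));   // heavily skewed
        System.out.println(cost(new int[] {130, 130}));  // perfectly balanced
    }
}
```

For the 10/250 split reported above the cost is close to 1.0, while the even 130/130 split costs 0.0.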



[jira] [Updated] (HBASE-7704) migration tool that checks presence of HFile V1 files

2013-04-11 Thread Himanshu Vashishtha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Himanshu Vashishtha updated HBASE-7704:
---

Attachment: HBASE-7704-v5.patch

Removed an unused import.

> migration tool that checks presence of HFile V1 files
> -
>
> Key: HBASE-7704
> URL: https://issues.apache.org/jira/browse/HBASE-7704
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Himanshu Vashishtha
>Priority: Blocker
> Fix For: 0.95.1
>
> Attachments: HBase-7704-v1.patch, HBase-7704-v2.patch, 
> HBase-7704-v3.patch, HBase-7704-v4.patch, HBASE-7704-v5.patch
>
>
> Below was Stack's comment from HBASE-7660:
> Regards the migration 'tool', or 'tool' to check for presence of v1 files, I 
> imagine it as an addition to the hfile tool 
> http://hbase.apache.org/book.html#hfile_tool2 The hfile tool already takes a 
> bunch of args including printing out meta. We could add an option to print 
> out version only – or return 1 if version 1 or some such – and then do a bit 
> of code to just list all hfiles and run this script against each. Could MR it 
> if too many files.



[jira] [Commented] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2013-04-11 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629617#comment-13629617
 ] 

Jimmy Xiang commented on HBASE-1936:


Cool, will change the error messages as suggested. Thanks.

> ClassLoader that loads from hdfs; useful adding filters to classpath without 
> having to restart services
> ---
>
> Key: HBASE-1936
> URL: https://issues.apache.org/jira/browse/HBASE-1936
> Project: HBase
>  Issue Type: New Feature
>Reporter: stack
>Assignee: Jimmy Xiang
>  Labels: noob
> Fix For: 0.98.0, 0.95.1
>
> Attachments: 0.94-1936.patch, cp_from_hdfs.patch, 
> HBASE-1936-trunk(forReview).patch, trunk-1936.patch, trunk-1936_v2.1.patch, 
> trunk-1936_v2.2.patch, trunk-1936_v2.patch, trunk-1936_v3.patch
>
>




[jira] [Updated] (HBASE-7704) migration tool that checks presence of HFile V1 files

2013-04-11 Thread Himanshu Vashishtha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Himanshu Vashishtha updated HBASE-7704:
---

Attachment: (was: HBASE-7704-v5.patch)

> migration tool that checks presence of HFile V1 files
> -
>
> Key: HBASE-7704
> URL: https://issues.apache.org/jira/browse/HBASE-7704
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Himanshu Vashishtha
>Priority: Blocker
> Fix For: 0.95.1
>
> Attachments: HBase-7704-v1.patch, HBase-7704-v2.patch, 
> HBase-7704-v3.patch, HBase-7704-v4.patch, HBASE-7704-v5.patch
>
>
> Below was Stack's comment from HBASE-7660:
> Regards the migration 'tool', or 'tool' to check for presence of v1 files, I 
> imagine it as an addition to the hfile tool 
> http://hbase.apache.org/book.html#hfile_tool2 The hfile tool already takes a 
> bunch of args including printing out meta. We could add an option to print 
> out version only – or return 1 if version 1 or some such – and then do a bit 
> of code to just list all hfiles and run this script against each. Could MR it 
> if too many files.



[jira] [Commented] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2013-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629614#comment-13629614
 ] 

Ted Yu commented on HBASE-1936:
---

For error messages, how about the following?
{code}
+} catch (InstantiationException e) {
+  throw new RuntimeException("Couldn't instantiate " + className, e);
+} catch (IllegalAccessException e) {
+  throw new RuntimeException("No access to " + className, e);
{code}
I am fine with keeping class loader variable name.

> ClassLoader that loads from hdfs; useful adding filters to classpath without 
> having to restart services
> ---
>
> Key: HBASE-1936
> URL: https://issues.apache.org/jira/browse/HBASE-1936
> Project: HBase
>  Issue Type: New Feature
>Reporter: stack
>Assignee: Jimmy Xiang
>  Labels: noob
> Fix For: 0.98.0, 0.95.1
>
> Attachments: 0.94-1936.patch, cp_from_hdfs.patch, 
> HBASE-1936-trunk(forReview).patch, trunk-1936.patch, trunk-1936_v2.1.patch, 
> trunk-1936_v2.2.patch, trunk-1936_v2.patch, trunk-1936_v3.patch
>
>




[jira] [Commented] (HBASE-8285) HBaseClient never recovers for single HTable.get() calls with no retries when regions move

2013-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629609#comment-13629609
 ] 

Ted Yu commented on HBASE-8285:
---

Minor comment for patch v5:
{code}
+  // if there's something in the cache for this table.
+  tableLocations.remove(location.getRegionInfo().getStartKey());
{code}
'for this table' -> 'for this region'

Can you add a check for the (previous) value returned from remove() and only 
log the DEBUG message if the value is not null?
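The suggested guard — inspect the previous value that remove() returns and log only when an entry was actually evicted — might look like this standalone sketch, where a plain map stands in for the real per-table location cache:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class LocationCacheEviction {
    // Stand-in for the per-table region location cache keyed by region start key.
    private final ConcurrentMap<String, String> tableLocations = new ConcurrentHashMap<>();

    void cacheLocation(String startKey, String server) {
        tableLocations.put(startKey, server);
    }

    /** Returns true only if a cached entry was actually removed. */
    boolean deleteCachedLocation(String startKey) {
        String removed = tableLocations.remove(startKey);
        if (removed != null) {
            // Log only when something was evicted, as suggested in the review.
            System.out.println("DEBUG: removed cached location " + removed
                + " for start key " + startKey);
            return true;
        }
        return false;
    }
}
```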

> HBaseClient never recovers for single HTable.get() calls with no retries when 
> regions move
> --
>
> Key: HBASE-8285
> URL: https://issues.apache.org/jira/browse/HBASE-8285
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.94.6.1
>Reporter: Varun Sharma
>Assignee: Varun Sharma
>Priority: Critical
> Fix For: 0.98.0, 0.94.7, 0.95.1
>
> Attachments: 8285-0.94.txt, 8285-0.94-v2.txt, 8285-0.94-v3.txt, 
> 8285-0.94-v4.txt, 8285-0.94-v5.txt, 8285-trunk.txt, 8285-trunk-v2.txt
>
>
> Steps to reproduce this bug:
> 1) Gracefully restart a region server, causing regions to get redistributed.
> 2) Client call to this region keeps failing since Meta Cache is never purged 
> on the client for the region that moved.
> Reason behind the bug:
> 1) Client continues to hit the old region server.
> 2) The old region server throws NotServingRegionException which is not 
> handled correctly and the META cache entries are never purged for that server 
> causing the client to keep hitting the old server.
> The reason lies in ServerCallable code since we only purge META cache entries 
> when there is a RetriesExhaustedException, SocketTimeoutException or 
> ConnectException. However, there is no case check for 
> NotServingRegionException(s).
> Why is this not a problem for Scan(s) and Put(s) ?
> a) If a region server is not hosting a region/scanner, then an 
> UnknownScannerException is thrown which causes a relocateRegion() call 
> causing a refresh of the META cache for that particular region.
> b) For put(s), the processBatchCallback() interface in HConnectionManager is 
> used which clears out META cache entries for all kinds of exceptions except 
> DoNotRetryException.
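The fix amounts to treating NotServingRegionException like the other purge-triggering exceptions: evict the stale cached location so the next call goes back to META. A minimal standalone sketch (RegionCache and its methods are illustrative stand-ins, not HBase's API):

```java
import java.util.HashMap;
import java.util.Map;

/** Stand-in for the client's META location cache, keyed by region start key. */
public class RegionCache {
    private final Map<String, String> locations = new HashMap<>();

    void cache(String startKey, String server) {
        locations.put(startKey, server);
    }

    String cached(String startKey) {
        return locations.get(startKey);
    }

    /**
     * Called when a server answers with NotServingRegionException: purge the
     * stale entry so the next call re-reads META instead of retrying the old
     * server forever.
     */
    void onNotServingRegion(String startKey) {
        locations.remove(startKey);
    }
}
```

Without the `onNotServingRegion` purge, a no-retry `HTable.get()` keeps resolving to the old server from the cache, which is the failure mode described above.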



[jira] [Commented] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2013-04-11 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629602#comment-13629602
 ] 

Jimmy Xiang commented on HBASE-1936:


[~yuzhih...@gmail.com], the error message is from the original readFields 
method of the filters. How do you want the error message changed?
As to the variable name, we could change it if a new one is introduced in the 
future. I agree it is better to be specific, but I'm not sure what use cases we 
will have in the future.

> ClassLoader that loads from hdfs; useful adding filters to classpath without 
> having to restart services
> ---
>
> Key: HBASE-1936
> URL: https://issues.apache.org/jira/browse/HBASE-1936
> Project: HBase
>  Issue Type: New Feature
>Reporter: stack
>Assignee: Jimmy Xiang
>  Labels: noob
> Fix For: 0.98.0, 0.95.1
>
> Attachments: 0.94-1936.patch, cp_from_hdfs.patch, 
> HBASE-1936-trunk(forReview).patch, trunk-1936.patch, trunk-1936_v2.1.patch, 
> trunk-1936_v2.2.patch, trunk-1936_v2.patch, trunk-1936_v3.patch
>
>




[jira] [Updated] (HBASE-7704) migration tool that checks presence of HFile V1 files

2013-04-11 Thread Himanshu Vashishtha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Himanshu Vashishtha updated HBASE-7704:
---

Attachment: HBASE-7704-v5.patch

Added handling of HFileLinks, plus changes per Matteo's comments. Tested the 
script against an HBase installation which has snapshots in it. It works well. 
Please comment.

> migration tool that checks presence of HFile V1 files
> -
>
> Key: HBASE-7704
> URL: https://issues.apache.org/jira/browse/HBASE-7704
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Himanshu Vashishtha
>Priority: Blocker
> Fix For: 0.95.1
>
> Attachments: HBase-7704-v1.patch, HBase-7704-v2.patch, 
> HBase-7704-v3.patch, HBase-7704-v4.patch, HBASE-7704-v5.patch
>
>
> Below was Stack's comment from HBASE-7660:
> Regards the migration 'tool', or 'tool' to check for presence of v1 files, I 
> imagine it as an addition to the hfile tool 
> http://hbase.apache.org/book.html#hfile_tool2 The hfile tool already takes a 
> bunch of args including printing out meta. We could add an option to print 
> out version only – or return 1 if version 1 or some such – and then do a bit 
> of code to just list all hfiles and run this script against each. Could MR it 
> if too many files.



[jira] [Commented] (HBASE-7801) Allow a deferred sync option per Mutation.

2013-04-11 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629586#comment-13629586
 ] 

Lars Hofhansl commented on HBASE-7801:
--

Enis and Anoop +1'd already (and I have only clarified the code and added a 
test since then).
If there are no objections I will commit this tomorrow.

I would also like to have this client-side API in 0.94 (but without the rest of 
the intrusive changes). Thinking about how to do that in a backward- and 
binary-compatible way now.

> Allow a deferred sync option per Mutation.
> --
>
> Key: HBASE-7801
> URL: https://issues.apache.org/jira/browse/HBASE-7801
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.94.6, 0.95.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.98.0, 0.94.7, 0.95.1
>
> Attachments: 7801-0.94-v1.txt, 7801-0.94-v2.txt, 7801-0.94-v3.txt, 
> 7801-0.96-full-v2.txt, 7801-0.96-full-v3.txt, 7801-0.96-full-v4.txt, 
> 7801-0.96-full-v5.txt, 7801-0.96-v10.txt, 7801-0.96-v1.txt, 7801-0.96-v6.txt, 
> 7801-0.96-v7.txt, 7801-0.96-v8.txt, 7801-0.96-v9.txt
>
>
> Won't have time for parent. But a deferred sync option on a per-operation 
> basis comes up quite frequently.
> In 0.96 this can be handled cleanly via protobufs and 0.94 we can have a 
> special mutation attribute.
> For batch operations we'd take the safest sync option of any of the mutations. 
> I.e., if there is at least one that wants to be flushed, we'd sync the batch; 
> if there are none of those but at least one that wants deferred flush, we defer 
> flush the batch, etc.
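The batch rule described above — take the safest sync option of any mutation in the batch — reduces to a maximum over an ordered durability enum. A sketch with illustrative names (not the actual 0.94/0.96 API):

```java
public class BatchDurability {
    /** Ordered from least to most durable, so ordinal comparison picks the safest. */
    enum SyncOption { DEFERRED_FLUSH, SYNC_WAL }

    /** Returns the safest option across a batch: any SYNC_WAL upgrades the whole batch. */
    static SyncOption safest(SyncOption[] perMutation) {
        SyncOption result = SyncOption.DEFERRED_FLUSH;
        for (SyncOption s : perMutation) {
            if (s.ordinal() > result.ordinal()) {
                result = s;
            }
        }
        return result;
    }
}
```

A batch of all-deferred mutations is deferred; mixing in a single sync-WAL mutation makes the whole batch sync, matching the "safest option wins" rule.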



[jira] [Created] (HBASE-8328) The servername regex in HServerInfo accepts invalid hostnames

2013-04-11 Thread Gaurav Menghani (JIRA)
Gaurav Menghani created HBASE-8328:
--

 Summary: The servername regex in HServerInfo accepts invalid 
hostnames
 Key: HBASE-8328
 URL: https://issues.apache.org/jira/browse/HBASE-8328
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.89-fb
Reporter: Gaurav Menghani


The HServerInfo regex matches invalid hostnames like " " (a single space), 
"!#$!", etc. It should be made stricter to follow the DNS RFCs more closely.



[jira] [Commented] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2013-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629584#comment-13629584
 ] 

Ted Yu commented on HBASE-1936:
---

Minor comments for 0.94 patch :
{code}
+} catch (InstantiationException e) {
+  throw new RuntimeException("Failed deserialize.", e);
+} catch (IllegalAccessException e) {
+  throw new RuntimeException("Failed deserialize.", e);
{code}
Can we have a better error message above?
{code}
+   * Dynamic class loader to load filter/comparators
+   */
+  private final static ClassLoader CLASS_LOADER;
{code}
What if a class loader for another purpose is introduced in the future? Should 
the above variable name be more specific?

> ClassLoader that loads from hdfs; useful adding filters to classpath without 
> having to restart services
> ---
>
> Key: HBASE-1936
> URL: https://issues.apache.org/jira/browse/HBASE-1936
> Project: HBase
>  Issue Type: New Feature
>Reporter: stack
>Assignee: Jimmy Xiang
>  Labels: noob
> Fix For: 0.98.0, 0.95.1
>
> Attachments: 0.94-1936.patch, cp_from_hdfs.patch, 
> HBASE-1936-trunk(forReview).patch, trunk-1936.patch, trunk-1936_v2.1.patch, 
> trunk-1936_v2.2.patch, trunk-1936_v2.patch, trunk-1936_v3.patch
>
>




[jira] [Commented] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2013-04-11 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629582#comment-13629582
 ] 

Lars Hofhansl commented on HBASE-1936:
--

Looks good to me. I'll do some more testing too.
[~giacomotaylor] Do you guys want to have a look?

> ClassLoader that loads from hdfs; useful adding filters to classpath without 
> having to restart services
> ---
>
> Key: HBASE-1936
> URL: https://issues.apache.org/jira/browse/HBASE-1936
> Project: HBase
>  Issue Type: New Feature
>Reporter: stack
>Assignee: Jimmy Xiang
>  Labels: noob
> Fix For: 0.98.0, 0.95.1
>
> Attachments: 0.94-1936.patch, cp_from_hdfs.patch, 
> HBASE-1936-trunk(forReview).patch, trunk-1936.patch, trunk-1936_v2.1.patch, 
> trunk-1936_v2.2.patch, trunk-1936_v2.patch, trunk-1936_v3.patch
>
>




[jira] [Updated] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2013-04-11 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-1936:
---

Attachment: 0.94-1936.patch

Attached is the patch backported to 0.94. Due to Filter class changes (because 
of protobuf), the patch is a little different (in the trunk version, the class 
loader is used in ProtobufUtil; in this version, it is used in Get/Scan and a 
couple of filters). However, the class loader itself remains the same.

> ClassLoader that loads from hdfs; useful adding filters to classpath without 
> having to restart services
> ---
>
> Key: HBASE-1936
> URL: https://issues.apache.org/jira/browse/HBASE-1936
> Project: HBase
>  Issue Type: New Feature
>Reporter: stack
>Assignee: Jimmy Xiang
>  Labels: noob
> Fix For: 0.98.0, 0.95.1
>
> Attachments: 0.94-1936.patch, cp_from_hdfs.patch, 
> HBASE-1936-trunk(forReview).patch, trunk-1936.patch, trunk-1936_v2.1.patch, 
> trunk-1936_v2.2.patch, trunk-1936_v2.patch, trunk-1936_v3.patch
>
>




[jira] [Commented] (HBASE-8119) Optimize StochasticLoadBalancer

2013-04-11 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629553#comment-13629553
 ] 

Elliott Clark commented on HBASE-8119:
--

Thanks for the perf work. Next time we get a 0.95 RC out, I'll make sure to test 
the balancer over a large cluster.

> Optimize StochasticLoadBalancer
> ---
>
> Key: HBASE-8119
> URL: https://issues.apache.org/jira/browse/HBASE-8119
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 0.95.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Critical
> Fix For: 0.98.0, 0.95.1
>
> Attachments: hbase-8119_v2.patch, hbase-8119_v3.patch
>
>
> On a 5 node trunk cluster, I ran into a weird problem with 
> StochasticLoadBalancer:
> server1   Thu Mar 14 03:42:50 UTC 2013    0.0       33
> server2   Thu Mar 14 03:47:53 UTC 2013    0.0       34
> server3   Thu Mar 14 03:46:53 UTC 2013    465.0     42
> server4   Thu Mar 14 03:47:53 UTC 2013    11455.0   282
> server5   Thu Mar 14 03:47:53 UTC 2013    0.0       34
> Total: 5                                  11920.0   425
> Notice that server4 has 282 regions, while the others have far fewer. Plus,
> one table with 260 regions has been super imbalanced:
> {code}
> Regions by Region Server
> Region Server Region Count
> http://server3:60030/ 10
> http://server4:60030/ 250
> {code}



[jira] [Updated] (HBASE-7605) TestMiniClusterLoadSequential fails in trunk build on hadoop 2

2013-04-11 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-7605:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the review, Ted. Committed to 0.95.1 and 0.98.0.

> TestMiniClusterLoadSequential fails in trunk build on hadoop 2
> --
>
> Key: HBASE-7605
> URL: https://issues.apache.org/jira/browse/HBASE-7605
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop2, test
>Reporter: Ted Yu
>Assignee: Jonathan Hsieh
>Priority: Critical
> Fix For: 0.98.0, 0.95.1
>
> Attachments: hbase-7605.patch
>
>
> From HBase-TRUNK-on-Hadoop-2.0.0 #354:
>   loadTest[0](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
> test timed out after 12 milliseconds
>   loadTest[1](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
> test timed out after 12 milliseconds
>   loadTest[2](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
> test timed out after 12 milliseconds
>   loadTest[3](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
> test timed out after 12 milliseconds



[jira] [Commented] (HBASE-8205) HBCK support for table locks

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629546#comment-13629546
 ] 

Hadoop QA commented on HBASE-8205:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12578294/hbase-8205_v4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 6 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces lines longer than 
100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5276//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5276//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5276//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5276//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5276//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5276//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5276//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5276//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5276//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5276//console

This message is automatically generated.

> HBCK support for table locks
> 
>
> Key: HBASE-8205
> URL: https://issues.apache.org/jira/browse/HBASE-8205
> Project: HBase
>  Issue Type: Improvement
>  Components: hbck, master, regionserver
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.95.1
>
> Attachments: hbase-8205_v1.patch, hbase-8205_v2.patch, 
> hbase-8205_v4.patch
>
>
> Table locks have been introduced in HBASE-7305, HBASE-7546, and others (see 
> the design doc at HBASE-7305). 
> This issue adds support in HBCK to report and fix possible conditions about 
> table locks. Namely, if due to some bug, the table lock remains not-released, 
> then HBCK should be able to report it, and remove the lock, so that normal 
> table operations will continue. 
> Also see the comments in HBASE-7977. 



[jira] [Updated] (HBASE-8324) TestHFileOutputFormat.testMRIncremental* fails against hadoop2 profile

2013-04-11 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-8324:
--

Attachment: hbase-8324.hadoop2.patch

The patch uses the hadoop.profile=2.0 trick to do the precommit run against hadoop2.

> TestHFileOutputFormat.testMRIncremental* fails against hadoop2 profile
> --
>
> Key: HBASE-8324
> URL: https://issues.apache.org/jira/browse/HBASE-8324
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop2, test
>Affects Versions: 0.95.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Fix For: 0.95.1
>
> Attachments: hbase-8324.hadoop2.patch
>
>
> Two test cases are failing:
> testMRIncrementalLoad, testMRIncrementalLoadWithSplit
> {code}
> <testcase classname="org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat" 
> name="testMRIncrementalLoad">
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.runIncrementalPELoad(TestHFileOutputFormat.java:468)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.doIncrementalLoadTest(TestHFileOutputFormat.java:378)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.testMRIncrementalLoad(TestHFileOutputFormat.java:348)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ...
> <testcase classname="org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat" 
> name="testMRIncrementalLoadWithSplit">
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.runIncrementalPELoad(TestHFileOutputFormat.java:468)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.doIncrementalLoadTest(TestHFileOutputFormat.java:378)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.testMRIncrementalLoadWithSplit(TestHFileOutputFormat.java:354)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ...
> {code}



[jira] [Updated] (HBASE-8324) TestHFileOutputFormat.testMRIncremental* fails against hadoop2 profile

2013-04-11 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-8324:
--

Fix Version/s: 0.98.0
   Status: Patch Available  (was: Open)

> TestHFileOutputFormat.testMRIncremental* fails against hadoop2 profile
> --
>
> Key: HBASE-8324
> URL: https://issues.apache.org/jira/browse/HBASE-8324
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop2, test
>Affects Versions: 0.95.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Fix For: 0.98.0, 0.95.1
>
> Attachments: hbase-8324.hadoop2.patch
>
>
> Two test cases are failing:
> testMRIncrementalLoad, testMRIncrementalLoadWithSplit
> {code}
> <testcase classname="org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat" 
> name="testMRIncrementalLoad">
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.runIncrementalPELoad(TestHFileOutputFormat.java:468)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.doIncrementalLoadTest(TestHFileOutputFormat.java:378)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.testMRIncrementalLoad(TestHFileOutputFormat.java:348)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ...
> <testcase classname="org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat" 
> name="testMRIncrementalLoadWithSplit">
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.runIncrementalPELoad(TestHFileOutputFormat.java:468)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.doIncrementalLoadTest(TestHFileOutputFormat.java:378)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.testMRIncrementalLoadWithSplit(TestHFileOutputFormat.java:354)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ...
> {code}



[jira] [Commented] (HBASE-8324) TestHFileOutputFormat.testMRIncremental* fails against hadoop2 profile

2013-04-11 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629527#comment-13629527
 ] 

Jonathan Hsieh commented on HBASE-8324:
---

And it passes consistently now.

{code}
$ mvn clean  -Dmaven.test.redirectOutputToFile=true test 
-Dtest=TestHFileOutputFormat#testMRIncre* -Dhadoop.profile=2.0
...
---
 T E S T S
---
Running org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 113.794 sec

{code}

> TestHFileOutputFormat.testMRIncremental* fails against hadoop2 profile
> --
>
> Key: HBASE-8324
> URL: https://issues.apache.org/jira/browse/HBASE-8324
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop2, test
>Affects Versions: 0.95.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Fix For: 0.95.1
>
>
> Two test cases are failing:
> testMRIncrementalLoad, testMRIncrementalLoadWithSplit
> {code}
> <testcase classname="org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat" 
> name="testMRIncrementalLoad">
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.runIncrementalPELoad(TestHFileOutputFormat.java:468)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.doIncrementalLoadTest(TestHFileOutputFormat.java:378)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.testMRIncrementalLoad(TestHFileOutputFormat.java:348)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ...
> <testcase classname="org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat" 
> name="testMRIncrementalLoadWithSplit">
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.runIncrementalPELoad(TestHFileOutputFormat.java:468)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.doIncrementalLoadTest(TestHFileOutputFormat.java:378)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.testMRIncrementalLoadWithSplit(TestHFileOutputFormat.java:354)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ...
> {code}



[jira] [Commented] (HBASE-8324) TestHFileOutputFormat.testMRIncremental* fails against hadoop2 profile

2013-04-11 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629515#comment-13629515
 ] 

Jonathan Hsieh commented on HBASE-8324:
---

Talked with [~sandyr], and we found this in the MRAppMaster's logs:

{code}
2013-04-11 14:29:30,234 INFO [AsyncDispatcher event handler] 
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: 
attempt_1365715697057_0001_r_01_1 TaskAttempt Transitioned from UNASSIGNED 
to KILLED
2013-04-11 14:29:30,234 INFO [Thread-47] 
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing the 
event EventType: CONTAINER_DEALLOCATE
2013-04-11 14:29:30,235 ERROR [AsyncDispatcher event handler] 
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Unexpected event for 
REDUCE task T_ATTEMPT_KILLED
2013-04-11 14:29:30,235 ERROR [AsyncDispatcher event handler] 
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Invalid event 
T_ATTEMPT_KILLED on Task task_1365715697057_0001_r_01
2013-04-11 14:29:30,241 INFO [AsyncDispatcher event handler] 
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1365715697057_0001Job 
Transitioned from RUNNING to ERROR
2013-04-11 14:29:30,261 INFO [IPC Server handler 12 on 60014] 
org.apache.hadoop.mapred.TaskAttemptListenerImpl: Commit-pending state update 
from attempt_1365715697057_0001_r_04_0
{code}

This is related to MAPREDUCE-4880, which was in turn fixed by MAPREDUCE-4607 (a 
race in speculative task execution, fixed in 2.0.3-alpha).  Compiling and 
running against hadoop-2.0.3-alpha fails even earlier, so instead of going 
that route I'm going to try an alternate workaround -- disabling mapper and 
reducer speculative execution.
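A sketch of that workaround as a configuration fragment (these are the standard 
Hadoop 2.x speculative-execution switches; the actual patch may instead set them 
programmatically on the test's job configuration):

```xml
<!-- Disable speculative execution for both map and reduce tasks, avoiding
     the MAPREDUCE-4607 race on Hadoop versions before 2.0.3-alpha. -->
<property>
  <name>mapreduce.map.speculative</name>
  <value>false</value>
</property>
<property>
  <name>mapreduce.reduce.speculative</name>
  <value>false</value>
</property>
```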

> TestHFileOutputFormat.testMRIncremental* fails against hadoop2 profile
> --
>
> Key: HBASE-8324
> URL: https://issues.apache.org/jira/browse/HBASE-8324
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop2, test
>Affects Versions: 0.95.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Fix For: 0.95.1
>
>
> Two test cases are failing:
> testMRIncrementalLoad, testMRIncrementalLoadWithSplit
> {code}
> <testcase classname="org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat" 
> name="testMRIncrementalLoad">
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.runIncrementalPELoad(TestHFileOutputFormat.java:468)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.doIncrementalLoadTest(TestHFileOutputFormat.java:378)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.testMRIncrementalLoad(TestHFileOutputFormat.java:348)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ...
> <testcase classname="org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat" 
> name="testMRIncrementalLoadWithSplit">
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.runIncrementalPELoad(TestHFileOutputFormat.java:468)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.doIncrementalLoadTest(TestHFileOutputFormat.java:378)
> at 
> org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.testMRIncrementalLoadWithSplit(TestHFileOutputFormat.java:354)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ...
> {code}



[jira] [Commented] (HBASE-7929) Reapply hbase-7507 "Make memstore flush be able to retry after exception" to 0.94 branch.

2013-04-11 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629511#comment-13629511
 ] 

Enis Soztutar commented on HBASE-7929:
--

Thanks Lars, 4th time is the charm! 

> Reapply hbase-7507 "Make memstore flush be able to retry after exception" to 
> 0.94 branch.
> -
>
> Key: HBASE-7929
> URL: https://issues.apache.org/jira/browse/HBASE-7929
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: stack
> Fix For: 0.94.7
>
>
> It was applied once then backed out because it seemed like it could be part 
> responsible for destabilizing unit tests.  Thinking is different now.  
> Retrying application.



[jira] [Commented] (HBASE-7801) Allow a deferred sync option per Mutation.

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629508#comment-13629508
 ] 

Hadoop QA commented on HBASE-7801:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12578283/7801-0.96-v10.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 159 
new or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces lines longer than 
100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestFullLogReconstruction

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5275//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5275//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5275//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5275//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5275//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5275//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5275//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5275//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5275//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5275//console

This message is automatically generated.

> Allow a deferred sync option per Mutation.
> --
>
> Key: HBASE-7801
> URL: https://issues.apache.org/jira/browse/HBASE-7801
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.94.6, 0.95.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.98.0, 0.94.7, 0.95.1
>
> Attachments: 7801-0.94-v1.txt, 7801-0.94-v2.txt, 7801-0.94-v3.txt, 
> 7801-0.96-full-v2.txt, 7801-0.96-full-v3.txt, 7801-0.96-full-v4.txt, 
> 7801-0.96-full-v5.txt, 7801-0.96-v10.txt, 7801-0.96-v1.txt, 7801-0.96-v6.txt, 
> 7801-0.96-v7.txt, 7801-0.96-v8.txt, 7801-0.96-v9.txt
>
>
> Won't have time for parent. But a deferred sync option on a per operation 
> basis comes up quite frequently.
> In 0.96 this can be handled cleanly via protobufs and 0.94 we can have a 
> special mutation attribute.
> For batch operation we'd take the safest sync option of any of the mutations. 
> I.e. if there is at least one that wants to be flushed we'd sync the batch, 
> if there's none of those but at least one that wants deferred flush we defer 
> flush the batch, etc.
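The "safest sync option of any of the mutations" rule described above can be 
sketched as follows. This is an illustrative sketch only, not the attached 
patch: the enum values and method names here are hypothetical stand-ins for 
whatever per-mutation durability flags the real change introduces.

```java
import java.util.Arrays;
import java.util.List;

public class BatchDurability {
    // Hypothetical per-mutation options, ordered weakest to strongest,
    // so a higher ordinal means a safer (more durable) choice.
    enum SyncOption { DEFERRED_FLUSH, SYNC_WAL }

    // The whole batch takes the strongest option any single mutation requested.
    static SyncOption safest(List<SyncOption> perMutation) {
        SyncOption result = SyncOption.DEFERRED_FLUSH;
        for (SyncOption s : perMutation) {
            if (s.ordinal() > result.ordinal()) {
                result = s; // one mutation wants a sync -> sync the batch
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // A batch with even one SYNC_WAL mutation is synced as a whole.
        SyncOption batch = safest(Arrays.asList(
            SyncOption.DEFERRED_FLUSH, SyncOption.SYNC_WAL));
        System.out.println(batch);  // SYNC_WAL
    }
}
```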


