[jira] [Commented] (HBASE-22134) JIT deoptimization in Cell.write

2019-04-01 Thread Todd Lipcon (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16807077#comment-16807077
 ] 

Todd Lipcon commented on HBASE-22134:
-

Yea, I think I meant -XX:+PrintCompilation. Also -XX:+TraceDeoptimization or 
maybe -XX:+DebugDeoptimization would be useful for this case. Not entirely 
certain which one was spouting the above.
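These can be combined on one command line; a hedged sketch (flag availability varies by JVM build -- -XX:+TraceDeoptimization in particular is typically only present in debug builds of HotSpot 8, and -XX:+LogCompilation requires diagnostic options to be unlocked; the jar name below is a placeholder):

```shell
# Sketch: flags for observing JIT compilation and deoptimization on HotSpot 8.
# TraceDeoptimization may not exist in a product (non-debug) JVM build.
java -XX:+PrintCompilation \
     -XX:+UnlockDiagnosticVMOptions -XX:+LogCompilation \
     -XX:LogFile=hotspot_compile.log \
     -jar my-app.jar
```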

Apparently the folks at FB have hit this issue in the context of Presto. 
Discussing with them here: 
https://groups.google.com/d/topic/presto-users/RCVd_UVMW5I/discussion

bq. This probably happens all over the codebase I'm guessing

Not sure of that -- when this happens the performance degradation is pretty 
dramatic. I think someone would probably have spotted it.


> JIT deoptimization in Cell.write
> 
>
> Key: HBASE-22134
> URL: https://issues.apache.org/jira/browse/HBASE-22134
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 2.0.2
>Reporter: Todd Lipcon
>Priority: Major
>
> I was looking at a profile of a workload which was running compaction very 
> slowly, and saw that the top CPU consumers were from JVM internals regarding 
> deoptimization. I managed to write a little systemtap script to extract the 
> deoptimization log and got the following in a tight loop:
> "Uncommon trap: trap_request=0xff67 fr.pc=0x7f85bcdb8644"
> "Uncommon trap: reason=unstable_if action=none pc=0x7f85bcdb8644 
> method=org.apache.hadoop.hbase.io.encoding.NoneEncoder.write(Lorg/apache/hadoop/hbase/Cell;)I
>  @ 67"
> "DEOPT PACKING pc=0x7f85bcdb8644 sp=0x7f84d3d83080"
> "DEOPT UNPACKING pc=0x7f85b5005229 sp=0x7f84d3d82f30 mode 2"
> The java stack is spending most of its time at:
>   java.lang.Thread.State: RUNNABLE
>   at 
> org.apache.hadoop.hbase.io.encoding.NoneEncoder.write(NoneEncoder.java:57)
>   at 
> org.apache.hadoop.hbase.io.hfile.NoOpDataBlockEncoder.encode(NoOpDataBlockEncoder.java:55)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.write(HFileBlock.java:983)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.append(HFileWriterImpl.java:740)
> This was with Oracle JDK 1.8.0_112. Likely a JDK bug but perhaps some 
> reorganization of this code path could help avoid triggering the bug.
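The {{reason=unstable_if}} in the trap log above is C2's name for a branch it profiled as never taken and pruned from the compiled code; the first time that branch fires, the method takes an uncommon trap and deoptimizes. A minimal standalone sketch of the pattern (hypothetical code, not the HBase method in question):

```java
// Sketch of the branch shape behind an "unstable_if" uncommon trap: during
// warmup the condition is never true, so C2 compiles the method without the
// branch body; the first negative argument then triggers a deoptimization.
public class UnstableIfDemo {
    static int write(int len) {
        if (len < 0) {       // never taken during warmup -> pruned by C2
            return -len;     // reaching this forces an uncommon trap
        }
        return len + 1;
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += write(i); // warm up: branch profile says "never taken"
        }
        sum += write(-5);    // deoptimizes the compiled write()
        System.out.println(sum);
    }
}
```

Run with -XX:+PrintCompilation, the deoptimization typically shows up as a "made not entrant" line for write().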



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22134) JIT deoptimization in Cell.write

2019-03-29 Thread Todd Lipcon (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805633#comment-16805633
 ] 

Todd Lipcon commented on HBASE-22134:
-

Looks a lot like what's reported here: 
http://openjdk.5641.n7.nabble.com/Deoptimization-taking-up-most-CPU-cycles-td274012.html



[jira] [Commented] (HBASE-22134) JIT deoptimization in Cell.write

2019-03-29 Thread Todd Lipcon (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805632#comment-16805632
 ] 

Todd Lipcon commented on HBASE-22134:
-

For future reference, the stap script I used was:

{code}
probe process("/usr/lib64/libc-2.17.so").function("_IO_vsnprintf").return {
  printf("%s\n", user_string_n_quoted(@entry($string), $return));
}
{code}

(Because I noticed a call to vsnprintf in the deoptimization path, I guessed it 
was log-message-related, and bingo!) Likely you could get the same info out of 
-XX:+LogCompilation, but that flag can't be toggled at runtime.
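For completeness, a hedged sketch of invoking the script (the script filename and process name below are placeholders; systemtap and root privileges are required, and the libc path in the probe must match the target system):

```shell
# Attach the vsnprintf probe script to a running region server process.
sudo stap -x "$(pgrep -f HRegionServer | head -1)" trace_vsnprintf.stp
```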



[jira] [Created] (HBASE-22134) JIT deoptimization in Cell.write

2019-03-29 Thread Todd Lipcon (JIRA)
Todd Lipcon created HBASE-22134:
---

 Summary: JIT deoptimization in Cell.write
 Key: HBASE-22134
 URL: https://issues.apache.org/jira/browse/HBASE-22134
 Project: HBase
  Issue Type: Bug
  Components: Performance
Affects Versions: 2.0.2
Reporter: Todd Lipcon




[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-25 Thread Todd Lipcon (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522725#comment-16522725
 ] 

Todd Lipcon commented on HBASE-20403:
-

I would guess it's not affected because it has locking in the file reader path. 
That locking was removed by HBASE-17917 in 2.0.

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: hbase-20403.patch, hbase-20403.patch
>
>
> Log from long running test has following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size on disk calculations seem to get messed up due to encryption. Possible 
> fixes can be:
> * check whether the file is encrypted with FileStatus#isEncrypted(), and if so do not prefetch.
> * document that hbase.rs.prefetchblocksonopen cannot be true if file is 
> encrypted.





[jira] [Updated] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-25 Thread Todd Lipcon (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HBASE-20403:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.2
   2.1.0
   Status: Resolved  (was: Patch Available)

Committed to master, branch-2, branch-2.1, branch-2.0. Appears my commit access 
still works after 6 years! Thanks for the review.



[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-25 Thread Todd Lipcon (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522682#comment-16522682
 ] 

Todd Lipcon commented on HBASE-20403:
-

Got a report back from an internal test cluster that was previously reproducing 
this issue. With this patch applied, the issue appears to be resolved.



[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-22 Thread Todd Lipcon (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520694#comment-16520694
 ] 

Todd Lipcon commented on HBASE-20403:
-

Sure, I filed HADOOP-15557



[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-21 Thread Todd Lipcon (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519874#comment-16519874
 ] 

Todd Lipcon commented on HBASE-20403:
-

OK. New revision fixes the checkstyle. If someone out there knows how to 
reproduce the originally-reported issue and can check that this patch fixes it, 
that would be great confirmation that there isn't another issue lurking.



[jira] [Updated] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-21 Thread Todd Lipcon (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HBASE-20403:

Attachment: hbase-20403.patch



[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-20 Thread Todd Lipcon (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16518879#comment-16518879
 ] 

Todd Lipcon commented on HBASE-20403:
-

bq. even though the underlying, wrapped DFSInputStream seems mostly thread-safe

That's an interesting point. I just looked at DFSInputStream and sure enough 
these non-positional methods are marked synchronized. However, it's somewhat 
odd because you'd still need some external synchronization to know where you're 
reading from. That is to say, if one thread is doing a 'seek, then read' at the 
same time as the other, they could interleave and one thread reads from the 
other thread's position.

That said, I could see the synchronization of DFSInputStream hiding bugs -- 
maybe it happens that sometimes one thread reads the data meant for another 
thread and just proceeds having read the wrong block. Or, it reads the wrong 
data because of the race, sees it as an HBase-level checksum failure, and 
performs a retry. In the Crypto case, because the input stream is not 
synchronized, it now ends up in a crash or an odd exception instead of "just 
reading the wrong data".

I think on the HDFS side (Hadoop common, really), we should add some sanity 
checking to prevent concurrent use and throw ConcurrentModificationException 
when we detect it so such bugs are obvious in the future instead of being very 
difficult to diagnose.
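A sketch of what such a guard could look like (hypothetical class and names, not an actual Hadoop API): record which thread is using the stateful read path and fail fast when a second caller shows up.

```java
import java.util.ConcurrentModificationException;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical fail-fast guard for a stateful (seek-then-read) stream:
// enter() records the current user; a second enter() before exit() throws
// instead of letting callers silently interleave reads.
public class SingleReaderGuard {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void enter() {
        Thread me = Thread.currentThread();
        if (!owner.compareAndSet(null, me)) {
            throw new ConcurrentModificationException(
                "stream already in use by " + owner.get().getName());
        }
    }

    public void exit() {
        owner.set(null);
    }
}
```

A read implementation would call enter() before the seek/read pair and exit() in a finally block, so a racing prefetch thread surfaces as an immediate exception rather than as a silently corrupted read.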

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: hbase-20403.patch
>
>
> Log from long running test has following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size-on-disk calculations seem to get thrown off by encryption. Possible 
> fixes:
> * check whether the file is encrypted with FileStatus#isEncrypted() and, if 
> so, do not prefetch.
> * document that hbase.rs.prefetchblocksonopen cannot be true if the file is 
> encrypted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-20 Thread Todd Lipcon (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518684#comment-16518684
 ] 

Todd Lipcon commented on HBASE-20403:
-

I'm guessing this is partially due to HBASE-17917. There used to be a lock, 
{{streamLock}}, inside HFileBlock that prevented multiple threads from doing 
non-positional reads on top of each other. That patch removed usage of the lock 
except in "unbuffer", so it no longer really protects anything (except perhaps 
multiple unbuffers from each other?). I'm not sure whether there are other 
cases where non-positional reads are used, but it's worth auditing to make sure 
my patch here isn't too shallow.

Also worth noting that moving from streaming to positional probably means that 
the prefetching will be slower.

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: hbase-20403.patch
>
>
> The log from a long-running test shows the following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size-on-disk calculations seem to get thrown off by encryption. Possible 
> fixes:
> * check whether the file is encrypted with FileStatus#isEncrypted() and, if 
> so, do not prefetch.
> * document that hbase.rs.prefetchblocksonopen cannot be true if the file is 
> encrypted.





[jira] [Assigned] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-20 Thread Todd Lipcon (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon reassigned HBASE-20403:
---

Assignee: Todd Lipcon  (was: Umesh Agashe)

The attached patch has a test which reproduces a failure even on non-encrypted 
systems. Changing the prefetch to use pread instead of a streaming read seems to 
fix the issue. We should verify that it also fixes it on encrypted filesystems.
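The streaming-vs-pread distinction the patch relies on can be seen with plain NIO, which mirrors the two read paths of DFSInputStream: a streaming read advances a shared position, while a positional read (pread) leaves it untouched. Two threads doing streaming reads on one stream race on that shared position; preads do not. This is a sketch by analogy using java.nio.FileChannel, not HBase/HDFS code:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class PreadVsStreaming {
    // Returns the channel position after a streaming read and after a pread.
    public static long[] positionsAfterReads(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate(4);
            ch.read(buf);                     // streaming read: advances the shared position
            long afterStreaming = ch.position();
            buf.clear();
            ch.read(buf, 8);                  // positional read: position is untouched
            long afterPread = ch.position();
            return new long[] { afterStreaming, afterPread };
        }
    }
}
```

After the streaming read the position has advanced to 4; the subsequent pread at offset 8 leaves it at 4, which is why preads are safe to issue from a prefetch thread concurrently with a scanner.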

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: hbase-20403.patch
>
>





[jira] [Updated] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-20 Thread Todd Lipcon (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HBASE-20403:

Status: Patch Available  (was: Open)

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: hbase-20403.patch
>
>





[jira] [Updated] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-20 Thread Todd Lipcon (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HBASE-20403:

Attachment: hbase-20403.patch

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: hbase-20403.patch
>
>





[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-19 Thread Todd Lipcon (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517805#comment-16517805
 ] 

Todd Lipcon commented on HBASE-20403:
-

Hello from the peanut gallery!

Looking at the implementation of prefetch, it seems like the prefetch task 
scheduled on a separate thread calls readBlock() on the HFileReaderImpl even 
though there might be concurrent calls from the main (scanner) thread. It calls 
readBlock() with pread == false, which means that it ends up screwing with the 
file position, buffers, and underlying codec from the main thread. Seems like 
that could easily cause invalid data reads, weird buffer offsets, and crypto 
library crashes (due to concurrent usage of the same cipher).

Am I mis-remembering the thread safety guarantees of HFileReader? I had thought 
it was not meant to be thread-safe, but the prefetching is basically 
multi-threaded access to a single instance.

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 3.0.0
>
>





[jira] [Commented] (HBASE-16583) Staged Event-Driven Architecture

2016-09-09 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15477931#comment-15477931
 ] 

Todd Lipcon commented on HBASE-16583:
-

http://matt-welsh.blogspot.com/2010/07/retrospective-on-seda.html is worth a 
read on this (from the original author of SEDA)

> Staged Event-Driven Architecture
> 
>
> Key: HBASE-16583
> URL: https://issues.apache.org/jira/browse/HBASE-16583
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Phil Yang
>
> Staged Event-Driven Architecture (SEDA) splits request-handling logic into 
> several stages; each stage is executed in a thread pool, and the stages are 
> connected by queues.
> Currently, in region server we use a thread pool to handle requests from 
> client. The number of handlers is configurable, reading and writing use 
> different pools. The current architecture has two limitations:
> Performance:
> Different parts of the handling path have different bottlenecks. For example, 
> accessing the MemStore and cache mainly consumes CPU, while accessing HDFS 
> mainly consumes network/disk IO. If we use SEDA and split them into two 
> different stages, we can size the two pools differently according to the 
> CPU/disk/network performance, case by case.
> Availability:
> HBASE-16388 described a scenario where, if the client uses a thread pool and 
> blocking methods to access region servers, a single slow server may exhaust 
> most of the client's threads. For HBase, we are the client and HDFS 
> datanodes are the servers: a slow datanode may exhaust most of the handlers. 
> The best way to resolve this issue is to make HDFS requests non-blocking, 
> which is exactly what SEDA does.
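The stage/queue structure described above can be sketched with plain java.util.concurrent primitives. This is a toy two-stage pipeline in the spirit of the proposal: a "compute" stage and an "I/O" stage, each with its own independently sized pool, connected by a bounded queue. The stage names, pool sizes, and the squaring "work" are illustrative inventions, not anything from the HBase code base:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class MiniSeda {
    public static List<Integer> run(List<Integer> requests) throws Exception {
        BlockingQueue<Integer> handoff = new ArrayBlockingQueue<>(16);
        ExecutorService compute = Executors.newFixedThreadPool(2); // CPU-bound stage
        ExecutorService io = Executors.newSingleThreadExecutor();  // "I/O" stage
        List<Integer> out = new ArrayList<>();

        // Stage 2: drain exactly as many results as will be submitted.
        Future<?> drain = io.submit(() -> {
            for (int i = 0; i < requests.size(); i++) {
                out.add(handoff.take());
            }
            return null;
        });
        // Stage 1: do the "CPU work" and hand results across the queue.
        for (int r : requests) {
            compute.submit(() -> {
                handoff.put(r * r);
                return null;
            });
        }
        drain.get(10, TimeUnit.SECONDS); // happens-before: safe to read 'out' now
        compute.shutdown();
        io.shutdown();
        Collections.sort(out); // stage-1 completion order is nondeterministic
        return out;
    }
}
```

The key property is that the two pools can be tuned independently of each other; backpressure between stages comes for free from the bounded queue.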



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-3680) Publish more metrics about mslab

2016-05-16 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon reassigned HBASE-3680:
--

Assignee: (was: Todd Lipcon)

> Publish more metrics about mslab
> 
>
> Key: HBASE-3680
> URL: https://issues.apache.org/jira/browse/HBASE-3680
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.90.1
>Reporter: Jean-Daniel Cryans
> Fix For: 0.92.3
>
> Attachments: hbase-3680.txt, hbase-3680.txt
>
>
> We have been using mslab on all our clusters for a while now and it seems it 
> tends to OOME or send us into GC loops of death a lot more than it used to. 
> For example, one RS with mslab enabled and 7GB of heap died out of OOME this 
> afternoon; it had .55GB in the block cache and 2.03GB in the memstores which 
> doesn't account for much... but it could be that because of mslab a lot of 
> space was lost in those incomplete 2MB blocks and without metrics we can't 
> really tell. Compactions were running at the time of the OOME and I see block 
> cache activity. The average load on that cluster is 531.
> We should at least publish the total size of all those blocks and maybe even 
> take actions based on that (like force flushing).





[jira] [Commented] (HBASE-14864) Add support for bucketing of keys into client library

2015-11-20 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15018796#comment-15018796
 ] 

Todd Lipcon commented on HBASE-14864:
-

Interesting proposal. Can you give an example or two of the key transformation 
you're suggesting? Typically "bucket" means some kind of hash calculation, but 
from the description it sounds more like you're applying some "round to nearest 
5 minute interval"?
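If the intent is interval rounding rather than hashing, a minimal sketch of such a strategy might look like the following. All names here (TimeBucketing, bucketOf, bucketKey, INTERVAL_SECONDS) are mine for illustration, not a proposed API:

```java
public class TimeBucketing {
    static final long INTERVAL_SECONDS = 300; // 5-minute buckets

    // Truncate an epoch timestamp to the start of its interval.
    public static long bucketOf(long epochSeconds) {
        return epochSeconds - (epochSeconds % INTERVAL_SECONDS);
    }

    // Prefix the user key with a zero-padded bucket id so lexicographic
    // row-key order within a bucket lines up with time order.
    public static String bucketKey(String userKey, long epochSeconds) {
        return String.format("%012d-%s", bucketOf(epochSeconds), userKey);
    }
}
```

A scan over a time range would then fan out to each bucket prefix covering the range and merge the results client-side.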

> Add support for bucketing of keys into client library
> -
>
> Key: HBASE-14864
> URL: https://issues.apache.org/jira/browse/HBASE-14864
> Project: HBase
>  Issue Type: New Feature
>  Components: Client
>Reporter: Lars George
>
> This has been discussed and taught so many times, I believe it is time to 
> support it properly. The idea is to be able to assign an optional _bucketing_ 
> strategy to a table, which translates the user given row keys into a bucketed 
> version. This is done by either simple count, or by parts of the key. 
> Possibly some simple functionality should help _compute_ bucket keys. 
> For example, given a key {{\-\--...}} you could 
> imagine that a rule can be defined that takes the _epoch_ part and chunks it 
> into, for example, 5-minute buckets. This allows storing small time series 
> together and makes reading (especially over many servers) much more efficient.
> The client also supports the proper scan logic to fan a scan over the buckets 
> as needed. There may be an executor service (implicitly or explicitly 
> provided) that is used to fetch the original data with user visible ordering 
> from the distributed buckets. 
> Note that this has been attempted a few times, to various extents, out in the 
> field, but then withered away. This is an essential feature that, when present 
> in the API, will make users consider this earlier, instead of when it is too 
> late (when hot-spotting occurs, for example).
> The selected bucketing strategy and settings could be stored in the table 
> descriptor key/value pairs. This will allow any client to observe the 
> strategy transparently. If not set the behaviour is the same as today, so the 
> new feature is not touching any critical path in terms of code, and is fully 
> client side. (But could be considered for say UI support as well - if needed).
> The strategies are pluggable using classes, but a few default implementations 
> are supplied.





[jira] [Created] (HBASE-13382) IntegrationTestBigLinkedList should use SecureRandom

2015-04-01 Thread Todd Lipcon (JIRA)
Todd Lipcon created HBASE-13382:
---

 Summary: IntegrationTestBigLinkedList should use SecureRandom
 Key: HBASE-13382
 URL: https://issues.apache.org/jira/browse/HBASE-13382
 Project: HBase
  Issue Type: Bug
  Components: integration tests
Reporter: Todd Lipcon


IntegrationTestBigLinkedList currently uses java.util.Random to generate its 
random keys. The keys are 128 bits long, but we generate them using 
Random.nextBytes(). The Random implementation itself only has a 48-bit seed, so 
even though we have a very long key string, it doesn't have anywhere near that 
amount of entropy.

This means that after a few billion rows, it's quite likely to run into a 
collision:  filling in a 16-byte key is equivalent to four calls to 
rand.nextInt(). So, for 10B rows, we are cycling through 40B different 'seed' 
values. With a 48-bit seed, it's quite likely we'll end up using the same seed 
twice, after which point any future rows generated by the colliding mappers are 
going to be equal. This results in broken chains and a failed verification job.

The fix is simple -- we should use SecureRandom to generate the random keys, 
instead.





[jira] [Commented] (HBASE-11710) if offheap L2 blockcache, read from DFS into offheap and copy to blockcache without coming onheap

2014-10-17 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175839#comment-14175839
 ] 

Todd Lipcon commented on HBASE-11710:
-

Adding the ByteBuffer-based pread was always planned in HDFS -- we just didn't 
get to it. But I'm sure we can get someone to do the patch; it would be nice to 
have :)

 if offheap L2 blockcache, read from DFS into offheap and copy to blockcache 
 without coming onheap
 -

 Key: HBASE-11710
 URL: https://issues.apache.org/jira/browse/HBASE-11710
 Project: HBase
  Issue Type: Brainstorming
  Components: BlockCache, Offheaping, Performance
Reporter: stack
Assignee: stack
Priority: Critical

 Filing issue assigned to myself to take a look at feasibility of our doing 
 this





[jira] [Commented] (HBASE-11927) If java7, use zip crc

2014-09-12 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14132492#comment-14132492
 ] 

Todd Lipcon commented on HBASE-11927:
-

Using the native verification as it stands today is a little bit difficult -- 
we don't expose the normal Checksum interface for the native checksum, only 
the chunked-sums format used by HDFS. HADOOP-10859 is probably what 
you need here.

BTW, when I tried the zlib crc in Java7, I found it wasn't any faster than in 
Java6. Code inspection also showed that the implementation was the same, at 
least in OpenJDK. So I'm not sure this is beneficial in most environments.

 If java7, use zip crc
 -

 Key: HBASE-11927
 URL: https://issues.apache.org/jira/browse/HBASE-11927
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.99.1

 Attachments: c2021.crc2.svg, c2021.write.2.svg, c2021.zip.svg, 
 crc32ct.svg


 Up in hadoop they have this change. Let me publish some graphs to show that 
 it makes a difference (CRC is a massive amount of our CPU usage in my 
 profiling of an upload because of compacting, flushing, etc.).  We should 
 also make use of native CRCings -- especially the 2.6 HDFS-6865 and ilk -- in 
 hbase but that is another issue for now.





[jira] [Created] (HBASE-11945) Client writes may be reordered under contention

2014-09-10 Thread Todd Lipcon (JIRA)
Todd Lipcon created HBASE-11945:
---

 Summary: Client writes may be reordered under contention
 Key: HBASE-11945
 URL: https://issues.apache.org/jira/browse/HBASE-11945
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.6
Reporter: Todd Lipcon


I haven't seen this bug in practice, but I was thinking about this a bit and 
think there may be a correctness issue with the way that we handle client 
batches which contain multiple operations which touch the same row. The client 
expects that these operations will be performed in the same order they were 
submitted, but under contention I believe they can get arbitrarily reordered, 
leading to incorrect results.





[jira] [Commented] (HBASE-11945) Client writes may be reordered under contention

2014-09-10 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129502#comment-14129502
 ] 

Todd Lipcon commented on HBASE-11945:
-

The potential interleaving is:

Client 1: issues a batch with 2000 puts: Put row1, cf:col1, {0...1000}, Put 
row2, cf:col1, {0...1000}
Client 2: issues a batch with 1 put: Put row2, cf:col2, x
(ie same row, different column)

These two clients will contend for the same row lock. The minibatch code path 
iterates through the batch trying to acquire locks, and skipping the operations 
for a later pass if the lock is not available. So, I think these may interleave 
as follows:

C1: acquires lock for row1, and is in the process of iterating over the rest of 
the row1 operations
C2: acquires lock for row2, and is in the process of actually applying the 
operation to MemStore, etc
C1: fails to acquire the lock for the first row2 op, since row1 already has it. 
But, there are still 999 more row2 ops to iterate over
C2: commits its row2 operation, releasing the lock
C1: manages to acquire the lock for a later row2 op (eg the put of row2, 
cf:col1, 500)
C1: commits the minibatch

Now it is easy to see that C1 has committed its put of 500 before other puts 
which came earlier from the client.

This re-ordering is unexpected from C1's point of view, since when it later 
reads the row, something other than the latest data might persist (eg the 
1000th put it did might actually have gotten executed first instead of last). 
The problem's worse with a delete/insert sequence, when you have a 50% chance 
of ending up with a deleted row at the end.

I haven't tried to produce this bug, but I think you could build a functional 
test as follows:

T1: writes batches with 1000 puts (arbitrary contents) to row1 and 1000 puts 
to row2 (increasing integers)
T2: writes non-batched writes to a different column of row2
T3: read row2 in a loop and verify that the integer column is never seen to 
decrease.

1000 might not be large enough batches to reliably reproduce it, but I bet you 
could get this to fail eventually.
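The T3 verification step could be as simple as the following check over the values it observes. The class and method names are mine; a real test would fetch row2's integer column through an HBase client in a loop and feed each observation in:

```java
import java.util.List;

public class MonotonicVerifier {
    // Returns false if any observed value is lower than its predecessor,
    // i.e. a re-ordered commit became visible to the reader.
    public static boolean neverDecreases(List<Integer> observed) {
        for (int i = 1; i < observed.size(); i++) {
            if (observed.get(i) < observed.get(i - 1)) {
                return false;
            }
        }
        return true;
    }
}
```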

 Client writes may be reordered under contention
 ---

 Key: HBASE-11945
 URL: https://issues.apache.org/jira/browse/HBASE-11945
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.6
Reporter: Todd Lipcon

 I haven't seen this bug in practice, but I was thinking about this a bit and 
 think there may be a correctness issue with the way that we handle client 
 batches which contain multiple operations which touch the same row. The 
 client expects that these operations will be performed in the same order they 
 were submitted, but under contention I believe they can get arbitrarily 
 reordered, leading to incorrect results.





[jira] [Commented] (HBASE-11945) Client writes may be reordered under contention

2014-09-10 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129583#comment-14129583
 ] 

Todd Lipcon commented on HBASE-11945:
-

bq. Consider we are writing to same rowkey from a lot of map tasks running 
across multiple nodes. 
bq. Will the behavior be the serial writes?

The behavior has always been somewhat serial writes if you're writing to the 
same row key. There are some optimizations here around early lock release so 
the throughput is good, but they're definitely serialized if they're in the 
same row.

This issue is about a case where you might have two writes from the same client 
in the same batch getting re-ordered when they get committed.

bq. Would the above code prevent lock acquisition ?
hmm.. good catch, yea, maybe the 'break' actually prevents this issue. Does 
anyone have time to write the functional test I suggested above?

 Client writes may be reordered under contention
 ---

 Key: HBASE-11945
 URL: https://issues.apache.org/jira/browse/HBASE-11945
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.6
Reporter: Todd Lipcon

 I haven't seen this bug in practice, but I was thinking about this a bit and 
 think there may be a correctness issue with the way that we handle client 
 batches which contain multiple operations which touch the same row. The 
 client expects that these operations will be performed in the same order they 
 were submitted, but under contention I believe they can get arbitrarily 
 reordered, leading to incorrect results.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5979) Non-pread DFSInputStreams should be associated with scanners, not HFile.Readers

2014-07-02 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14050591#comment-14050591
 ] 

Todd Lipcon commented on HBASE-5979:


bq. DFSInputStream extends the above class and does not override the default 
implementation. You have no parallelism even for simple gets.

That's not true -- DFSInputStream definitely overrides the positional read 
method and provides parallelism.

{code}
  @Override
  public int read(long position, byte[] buffer, int offset, int length)
throws IOException {
// sanity checks
dfsClient.checkOpen();
...
{code}

(and it has been that way since at least five years ago, when I started 
working on Hadoop, as far as I can remember)

 Non-pread DFSInputStreams should be associated with scanners, not 
 HFile.Readers
 ---

 Key: HBASE-5979
 URL: https://issues.apache.org/jira/browse/HBASE-5979
 Project: HBase
  Issue Type: Improvement
  Components: Performance, regionserver
Reporter: Todd Lipcon

 Currently, every HFile.Reader has a single DFSInputStream, which it uses to 
 service all gets and scans. For gets, we use the positional read API (aka 
 pread) and for scans we use a synchronized block to seek, then read. The 
 advantage of pread is that it doesn't hold any locks, so multiple gets can 
 proceed at the same time. The advantage of seek+read for scans is that the 
 datanode starts to send the entire rest of the HDFS block, rather than just 
 the single hfile block necessary. So, in a single thread, pread is faster for 
 gets, and seek+read is faster for scans since you get a strong pipelining 
 effect.
 However, in a multi-threaded case where there are multiple scans (including 
 scans which are actually part of compactions), the seek+read strategy falls 
 apart, since only one scanner may be reading at a time. Additionally, a large 
 amount of wasted IO is generated on the datanode side, and we get none of the 
 earlier-mentioned advantages.
 In one test, I switched scans to always use pread, and saw a 5x improvement 
 in throughput of the YCSB scan-only workload, since it previously was 
 completely blocked by contention on the DFSIS lock.
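The locking difference between the two strategies can be sketched like this (a plain RandomAccessFile stands in for the DFSInputStream, and method names are illustrative; the point is that seek+read must serialize on the stream while pread touches no shared state):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;

// Sketch of the two read strategies: stateful seek+read vs. positional pread.
class StreamReader {
    private final RandomAccessFile file;
    StreamReader(RandomAccessFile f) { this.file = f; }

    // seek+read: mutates the shared stream position, so concurrent scanners
    // must serialize on the stream (the synchronized block described above).
    synchronized int seekAndRead(long pos, byte[] buf) throws IOException {
        file.seek(pos);
        return file.read(buf);
    }

    // pread: positional read, no shared state touched, so multiple gets can
    // proceed in parallel against the same stream.
    int pread(long pos, byte[] buf) throws IOException {
        return file.getChannel().read(ByteBuffer.wrap(buf), pos);
    }
}
```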



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11118) non environment variable solution for IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.Literal

2014-06-12 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14029336#comment-14029336
 ] 

Todd Lipcon commented on HBASE-11118:
-

Have we already considered using our own classloader to isolate our dependency?

Forking PB into HBase makes me nervous from a maintenance perspective. PB does 
new releases once a year or so and they tend to have good improvements - aren't 
we going to end up stuck on a past version?

Instead, could we put some more pressure on the upstream protobuf maintainers 
to include the ZeroCopyLiteralByteString, or at least make its super-class 
non-final in order to support this?

Alternatively, could we get shading to work by adding an extra indirection 
package that builds a jar-with-dependencies of both hbase-protocol and 
protobuf, and then shades that and re-publishes as an 
hbase-protocol-with-shaded-pb pom? Then the rest of HBase could depend on 
that?
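Such an indirection module might contain a maven-shade-plugin configuration roughly like the following (the module name and relocation package are hypothetical, not an actual HBase build artifact):

```xml
<!-- Hypothetical hbase-protocol-with-shaded-pb module: bundle hbase-protocol
     plus protobuf-java into one jar, then relocate com.google.protobuf so
     downstream modules never see the unshaded classes. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.protobuf</pattern>
            <shadedPattern>org.apache.hbase.shaded.com.google.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The rest of HBase would then depend on the shaded artifact rather than on hbase-protocol and protobuf-java directly.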

 non environment variable solution for IllegalAccessError: class 
 com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 --

 Key: HBASE-11118
 URL: https://issues.apache.org/jira/browse/HBASE-11118
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.2
Reporter: André Kelpe
Assignee: Andrew Purtell
Priority: Blocker
 Fix For: 0.99.0, 0.98.4

 Attachments: HBASE-11118-0.98.patch.gz, HBASE-11118-trunk.patch.gz, 
 shade_attempt.patch


 I am running into the problem described in 
 https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
 newer version within cascading.hbase 
 (https://github.com/cascading/cascading.hbase).
 One of the features of cascading.hbase is that you can use it from lingual 
 (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
 lingual has a notion of providers, which are fat jars that we pull down 
 dynamically at runtime. Those jars give users the ability to talk to any 
 system or format from SQL. They are added to the classpath programmatically 
 before we submit jobs to a hadoop cluster.
 Since lingual does not know upfront which providers are going to be used in 
 a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
 clunky and breaks the ease of use we had before. No other provider requires 
 this right now.
 It would be great to have a programmatic way to fix this when using fat 
 jars.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10659) [89-fb] Optimize the threading model in HBase write path

2014-03-03 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918966#comment-13918966
 ] 

Todd Lipcon commented on HBASE-10659:
-

Curious: do you have just a single memstore update thread per region? Any 
results on whether throughput is better when the workload is skewed towards a 
single hot region on a server?

Are you doing any sorting of the batch before going into the memstore update 
thread? That might result in some better performance as well if you have hot 
and cold regions of keyspace.

 [89-fb] Optimize the threading model in HBase write path
 

 Key: HBASE-10659
 URL: https://issues.apache.org/jira/browse/HBASE-10659
 Project: HBase
  Issue Type: New Feature
Reporter: Liyin Tang

 Recently, we have done multiple prototypes to optimize the HBase (0.89) write 
 path. Based on the simulator results, the following model is able to 
 achieve much higher overall throughput with fewer threads.
 IPC Writer Threads Pool:
 IPC handler threads will prepare all Put requests and append the WALEdit, as 
 one transaction, into a concurrent collection under a read lock, and then 
 just return.
 HLogSyncer Thread:
 Each HLogSyncer thread corresponds to one HLog stream. It swaps out the 
 concurrent collection under a write lock, then iterates over all the 
 elements in the previous collection, generates the sequence id for each 
 transaction, and writes to the HLog. After the HLog sync is done, it appends 
 these transactions as a batch into a blocking queue.
 Memstore Update Thread:
 The memstore update thread will poll the blocking queue and update the 
 memstore for each transaction, using the sequence id for MVCC. Once the 
 memstore update is done, it dispatches to the responder thread pool to 
 return to the client.
 Responder Thread Pool:
 The responder thread pool will return the RPC calls in parallel.
 We are still evaluating this model and will share more results/numbers once 
 it is ready. But we would really appreciate any comments in advance!
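The hand-off between the IPC handler threads and the syncer thread can be sketched as follows (illustrative names, not the actual 89-fb code): the read lock admits many concurrent appenders, while the write lock makes the collection swap atomic with respect to all of them.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Many IPC handler threads append under the read lock; the single syncer
// thread swaps the whole collection out under the write lock and processes
// the drained batch (sequence ids, HLog write) outside the lock.
class EditBuffer<T> {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private volatile ConcurrentLinkedQueue<T> edits = new ConcurrentLinkedQueue<>();

    // IPC handler threads: the read lock allows concurrent appenders but
    // excludes the syncer's swap.
    void append(T edit) {
        lock.readLock().lock();
        try { edits.add(edit); } finally { lock.readLock().unlock(); }
    }

    // HLog syncer thread: install a fresh queue under the write lock, then
    // drain the old one as one batch.
    List<T> swap() {
        ConcurrentLinkedQueue<T> old;
        lock.writeLock().lock();
        try {
            old = edits;
            edits = new ConcurrentLinkedQueue<>();
        } finally { lock.writeLock().unlock(); }
        return new ArrayList<>(old);
    }
}
```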



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10079) Race in TableName cache

2013-12-05 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840485#comment-13840485
 ] 

Todd Lipcon commented on HBASE-10079:
-

Is Bytes.equals used with ByteBuffer arguments in any hot paths? You've added a 
new allocation here which may be costly if so. Perhaps using the version of 
get() which takes an index is better in that code path?
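The allocation-free alternative suggested here might look like this (a sketch, not the actual Bytes.equals code): the absolute get(int) accessor compares in place instead of materializing a copy of the buffer's contents.

```java
import java.nio.ByteBuffer;

// Compare a ByteBuffer range against a byte[] without allocating, using the
// absolute get(int) accessor (which does not touch the buffer's position).
final class BufferCompare {
    static boolean regionEquals(ByteBuffer buf, int offset, byte[] bytes) {
        if (buf.limit() - offset < bytes.length) return false;
        for (int i = 0; i < bytes.length; i++) {
            if (buf.get(offset + i) != bytes[i]) return false;  // no copy made
        }
        return true;
    }
}
```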

 Race in TableName cache
 ---

 Key: HBASE-10079
 URL: https://issues.apache.org/jira/browse/HBASE-10079
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.96.1
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Blocker
 Fix For: 0.98.0, 0.96.1, 0.99.0

 Attachments: 10079.v1.patch, hbase-10079-addendum.patch, 
 hbase-10079.v2.patch


 Testing 0.96.1rc1.
 With one process incrementing a row in a table, we increment single col.  We 
 flush or do kills/kill-9 and data is lost.  flush and kill are likely the 
 same problem (kill would flush), kill -9 may or may not have the same root 
 cause.
 5 nodes
 hadoop 2.1.0 (a pre cdh5b1 hdfs).
 hbase 0.96.1 rc1 
 Test: 25 increments on a single row and single col with various numbers of 
 client threads (IncrementBlaster).  Verify we have a count of 25 after 
 the run (IncrementVerifier).
 Run 1: No fault injection.  5 runs.  count = 25. on multiple runs.  
 Correctness verified.  1638 inc/s throughput.
 Run 2: flushes table with incrementing row.  count = 246875 !=25.  
 correctness failed.  1517 inc/s throughput.  
 Run 3: kill of rs hosting incremented row.  count = 243750 != 25. 
 Correctness failed.   1451 inc/s throughput.
 Run 4: one kill -9 of rs hosting incremented row.  246878.!= 25.  
 Correctness failed. 1395 inc/s (including recovery)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-10079) Race in TableName cache

2013-12-05 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840726#comment-13840726
 ] 

Todd Lipcon commented on HBASE-10079:
-

If only escape analysis actually worked :)

 Race in TableName cache
 ---

 Key: HBASE-10079
 URL: https://issues.apache.org/jira/browse/HBASE-10079
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.96.1
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Blocker
 Fix For: 0.98.0, 0.96.1, 0.99.0

 Attachments: 10079.v1.patch, hbase-10079-addendum.patch, 
 hbase-10079.v2.patch


 Testing 0.96.1rc1.
 With one process incrementing a row in a table, we increment single col.  We 
 flush or do kills/kill-9 and data is lost.  flush and kill are likely the 
 same problem (kill would flush), kill -9 may or may not have the same root 
 cause.
 5 nodes
 hadoop 2.1.0 (a pre cdh5b1 hdfs).
 hbase 0.96.1 rc1 
 Test: 25 increments on a single row and single col with various numbers of 
 client threads (IncrementBlaster).  Verify we have a count of 25 after 
 the run (IncrementVerifier).
 Run 1: No fault injection.  5 runs.  count = 25. on multiple runs.  
 Correctness verified.  1638 inc/s throughput.
 Run 2: flushes table with incrementing row.  count = 246875 !=25.  
 correctness failed.  1517 inc/s throughput.  
 Run 3: kill of rs hosting incremented row.  count = 243750 != 25. 
 Correctness failed.   1451 inc/s throughput.
 Run 4: one kill -9 of rs hosting incremented row.  246878.!= 25.  
 Correctness failed. 1395 inc/s (including recovery)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-10052) use HDFS advisory caching to avoid caching HFiles that are not going to be read again (because they are being compacted)

2013-12-04 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13839769#comment-13839769
 ] 

Todd Lipcon commented on HBASE-10052:
-

One thing to be wary of: _during_ the compaction, readers are still accessing 
the old files, so if you're compacting large files, this could really hurt read 
latency during compactions (assuming that people are relying on linux LRU in 
addition to hbase-internal LRU for performance).

In most cases, as soon as the compaction is complete, we end up removing the 
input files anyway (thus removing from cache), right? Or is that no longer the 
case now that we have snapshots?

 use HDFS advisory caching to avoid caching HFiles that are not going to be 
 read again (because they are being compacted)
 

 Key: HBASE-10052
 URL: https://issues.apache.org/jira/browse/HBASE-10052
 Project: HBase
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 0.98.0


 HBase can benefit from doing dropbehind during compaction since compacted 
 files are not read again.  HDFS advisory caching, introduced in HDFS-4817, 
 can help here.  The right API here is {{DataInputStream#setDropBehind}}.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-8323) Low hanging checksum improvements

2013-11-06 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815068#comment-13815068
 ] 

Todd Lipcon commented on HBASE-8323:


You probably want to use it via DataChecksum, which is already a public class. 
It has the right logic to fallback to the Java implementation if the native one 
isn't available.
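For illustration only, per-chunk checksumming plus a one-pass bulk verify might look like the following sketch, with java.util.zip.CRC32 standing in for whichever implementation DataChecksum actually selects (class and method names here are illustrative, not Hadoop API):

```java
import java.util.Arrays;
import java.util.zip.CRC32;
import java.util.zip.Checksum;

// The block payload is checksummed in bytesPerChecksum-sized chunks, and a
// bulk verify recomputes and compares every chunk in a single pass.
final class ChunkedCrc {
    static long[] checksumChunks(byte[] data, int bytesPerChecksum) {
        int n = (data.length + bytesPerChecksum - 1) / bytesPerChecksum;
        long[] sums = new long[n];
        Checksum crc = new CRC32();
        for (int i = 0; i < n; i++) {
            int off = i * bytesPerChecksum;
            int len = Math.min(bytesPerChecksum, data.length - off);
            crc.reset();
            crc.update(data, off, len);
            sums[i] = crc.getValue();
        }
        return sums;
    }

    // Bulk verify: one pass over the whole block's chunks.
    static boolean bulkVerify(byte[] data, int bytesPerChecksum, long[] expected) {
        return Arrays.equals(checksumChunks(data, bytesPerChecksum), expected);
    }
}
```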

 Low hanging checksum improvements
 -

 Key: HBASE-8323
 URL: https://issues.apache.org/jira/browse/HBASE-8323
 Project: HBase
  Issue Type: Improvement
  Components: Performance
Reporter: Enis Soztutar

 Over at Hadoop land, [~tlipcon] had done some improvements for checksums, a 
 native implementation for CRC32C (HADOOP-7445) and bulk verify of checksums 
 (HADOOP-7444). 
 In HBase, we can do
  - Also develop a bulk verify API. Regardless of 
 hbase.hstore.bytes.per.checksum, we always want to verify the whole 
 checksum for the hfile block.
  - Enable NativeCrc32 to be used as a checksum algo. It is not clear how much 
 gain we can expect over pure java CRC32. 
 Though, longer term, we should focus on convincing the hdfs folks to do 
 inline checksums (HDFS-2699)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9802) A new failover test framework for HBase

2013-10-31 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810871#comment-13810871
 ] 

Todd Lipcon commented on HBASE-9802:


Sounds similar to the gremlins project I hacked together a few years ago: 
http://github.com/toddlipcon/gremlins

I'm sure yours will be more extensive (gremlins was a quick hack), but worth 
checking out the above for some ideas.

 A new failover test framework for HBase
 ---

 Key: HBASE-9802
 URL: https://issues.apache.org/jira/browse/HBASE-9802
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 0.94.3
Reporter: chendihao
Priority: Minor

 Currently HBase uses ChaosMonkey for IT test and fault injection. It will 
 restart regionserver, force balancer and perform other actions randomly and 
 periodically. However, we need a more extensible and full-featured framework 
 for our failover test and we find ChaosMonkey can't suit our needs since it 
 has the following drawbacks.
 1) Only process-level actions can be simulated, not support 
 machine-level/hardware-level/network-level actions.
 2) No data validation before and after the test, so fatal bugs such as those 
 that cause data inconsistency may be overlooked.
 3) When a failure occurs, we can't reproduce the problem and it is hard to 
 figure out the reason.
 Therefore, we have developed a new framework to satisfy the need of failover 
 test. We extended ChaosMonkey and implement the function to validate data and 
 to replay failed actions. Here are the features we add.
 1) Policy/Task/Action abstraction; separating Task from Policy and Action 
 makes it easier to manage and replay a set of actions.
 2) Actions are configurable. We have implemented some actions to cause 
 machine failures and defined the same interface as the original actions.
 3) Data consistency is validated before and after the failover test to 
 ensure availability and data correctness.
 4) After performing a set of actions, we also check the consistency of the 
 table.
 5) The set of actions that caused a test failure can be replayed, and the 
 reproducibility of actions helps in fixing the exposed bugs.
 Our team has developed this framework and run it for a while. Some bugs were 
 exposed and fixed by running this test framework. Moreover, we have a monitor 
 program which shows the progress of the failover test and makes sure our 
 cluster is as stable as we want. Now we are trying to make it more general 
 and will open-source it later.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9553) Pad HFile blocks to a fixed size before placing them into the blockcache

2013-09-17 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13770316#comment-13770316
 ] 

Todd Lipcon commented on HBASE-9553:


Interested to see the results here. When I tested block cache churn before, I 
didn't see heap fragmentation really crop up: 
http://blog.cloudera.com/blog/2011/02/avoiding-full-gcs-in-hbase-with-memstore-local-allocation-buffers-part-2/

For testing this improvement, it would be good to produce similar graphs of the 
CMS maximum chunk size metric from -XX:+PrintFLSStatistics output under some 
workload, and show that the improvement results in less fragmentation over time 
for at least some workload(s).

 Pad HFile blocks to a fixed size before placing them into the blockcache
 

 Key: HBASE-9553
 URL: https://issues.apache.org/jira/browse/HBASE-9553
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl

 In order to make it easy on the garbage collector and to avoid full 
 compaction phases we should make sure that all (or at least a large 
 percentage) of the HFile blocks as cached in the block cache are exactly the 
 same size.
 Currently an HFile block is typically slightly larger than the declared block 
 size, as the block will accommodate that last KV on the block. The padding 
 would be a ColumnFamily option. In many cases 100 bytes would probably be a 
 good value to make all blocks exactly the same size (but of course it depends 
 on the max size of the KVs).
 This does not have to be perfect. The more blocks evicted and replaced in the 
 block cache are of the exact same size the easier it should be on the GC.
 Thoughts?
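The padding step itself would be trivial; a minimal sketch (targetSize standing for the declared block size plus the configured slack, e.g. 100 bytes):

```java
import java.util.Arrays;

// Round a serialized block up to a fixed target size with a zero-filled
// tail, so that cached blocks come in (mostly) one size for the GC.
final class BlockPad {
    static byte[] padToFixed(byte[] block, int targetSize) {
        if (block.length >= targetSize) return block;  // oversized blocks stay as-is
        return Arrays.copyOf(block, targetSize);       // zero-filled padding
    }
}
```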

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9467) write can be totally blocked temporarily by a write-heavy region

2013-09-13 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13766885#comment-13766885
 ] 

Todd Lipcon commented on HBASE-9467:


I don't have time to look at the patch itself, but one thing that comes to 
mind: is this a compatible change? Or do you need to add a new client parameter 
of some sort that tells the server that it knows how to handle the new 
exception? It would be bad if existing clients all started getting some 
ClassNotFoundException trying to unwrap the new exception type, and actually 
failing rather than retrying. I don't know if that happens, but worth double 
checking (eg run YCSB with an old client against a new server)

 write can be totally blocked temporarily by a write-heavy region
 

 Key: HBASE-9467
 URL: https://issues.apache.org/jira/browse/HBASE-9467
 Project: HBase
  Issue Type: Improvement
Reporter: Feng Honghua
Assignee: Feng Honghua
 Attachments: HBASE-9467-trunk-v0.patch


 Write to a region can be blocked temporarily if the memstore of that region 
 reaches the threshold(hbase.hregion.memstore.block.multiplier * 
 hbase.hregion.flush.size) until the memstore of that region is flushed.
 For a write-heavy region, if its write requests saturate all the handler 
 threads of that RS when write blocking for that region occurs, requests of 
 other regions/tables to that RS also can't be served due to no available 
 handler threads...until the pending writes of that write-heavy region are 
 served after the flush is done. Hence during this time period, from the RS 
 perspective it can't serve any request from any table/region just due to a 
 single write-heavy region.
 This sounds not very reasonable, right? Maybe write requests from a region 
 can only be served by a sub-set of the handler threads, and then write 
 blocking of any single region can't lead to the scenario mentioned above?
 Comment?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9467) write can be totally blocked temporarily by a write-heavy region

2013-09-10 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13763698#comment-13763698
 ] 

Todd Lipcon commented on HBASE-9467:


Rather than blocking writes to a region which is above its memstore limit, why 
not reject them with a RegionOverloadedException or somesuch? This way it 
wouldn't occupy threads needlessly, and avoid long queueing delays. The client 
could then perform some exponential backoff.

In the future, we could avoid the polling retry containing heavy batches of 
puts by changing the client to retry with a simple probe RPC until the region 
indicates that it's unblocked.
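Client-side, the rejection-plus-backoff idea might look like this sketch (RegionOverloadedException is hypothetical and is represented here by a plain RuntimeException; all names are illustrative):

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.Supplier;

// On an "overloaded" rejection the client backs off exponentially (with
// jitter) instead of occupying a server handler thread while the region
// flushes, then retries up to maxAttempts times.
final class BackoffRetry {
    static <T> T callWithBackoff(Supplier<T> rpc, int maxAttempts)
            throws InterruptedException {
        long sleepMs = 100;                         // initial backoff
        for (int attempt = 1; ; attempt++) {
            try {
                return rpc.get();
            } catch (RuntimeException overloaded) { // stand-in for RegionOverloadedException
                if (attempt >= maxAttempts) throw overloaded;
                // jittered exponential backoff, capped at 10s
                Thread.sleep(sleepMs + ThreadLocalRandom.current().nextLong(sleepMs));
                sleepMs = Math.min(sleepMs * 2, 10_000);
            }
        }
    }
}
```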

 write can be totally blocked temporarily by a write-heavy region
 

 Key: HBASE-9467
 URL: https://issues.apache.org/jira/browse/HBASE-9467
 Project: HBase
  Issue Type: Improvement
Reporter: Feng Honghua
Priority: Minor

 Write to a region can be blocked temporarily if the memstore of that region 
 reaches the threshold(hbase.hregion.memstore.block.multiplier * 
 hbase.hregion.flush.size) until the memstore of that region is flushed.
 For a write-heavy region, if its write requests saturate all the handler 
 threads of that RS when write blocking for that region occurs, requests of 
 other regions/tables to that RS also can't be served due to no available 
 handler threads...until the pending writes of that write-heavy region are 
 served after the flush is done. Hence during this time period, from the RS 
 perspective it can't serve any request from any table/region just due to a 
 single write-heavy region.
 This sounds not very reasonable, right? Maybe write requests from a region 
 can only be served by a sub-set of the handler threads, and then write 
 blocking of any single region can't lead to the scenario mentioned above?
 Comment?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9467) write can be totally blocked temporarily by a write-heavy region

2013-09-10 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13763699#comment-13763699
 ] 

Todd Lipcon commented on HBASE-9467:


Sorry, hit submit too quickly... I meant to add that this approach is much 
better because it also helps with the problem of multiple different priority 
clients hitting the same region -- for example, a low priority MR job doing 
bulk puts into a region at the same time as a latency sensitive web app is 
doing single row requests. If you could set the watermarks for the low priority 
clients differently than high-priority clients, then rejecting the low priority 
ones and making them retry on the client side will leave room (both 
handler-wise and capacity wise) for the high priority ones to get in without 
sitting in lengthy RPC queues.

 write can be totally blocked temporarily by a write-heavy region
 

 Key: HBASE-9467
 URL: https://issues.apache.org/jira/browse/HBASE-9467
 Project: HBase
  Issue Type: Improvement
Reporter: Feng Honghua
Priority: Minor

 Write to a region can be blocked temporarily if the memstore of that region 
 reaches the threshold(hbase.hregion.memstore.block.multiplier * 
 hbase.hregion.flush.size) until the memstore of that region is flushed.
 For a write-heavy region, if its write requests saturate all the handler 
 threads of that RS when write blocking for that region occurs, requests of 
 other regions/tables to that RS also can't be served due to no available 
 handler threads...until the pending writes of that write-heavy region are 
 served after the flush is done. Hence during this time period, from the RS 
 perspective it can't serve any request from any table/region just due to a 
 single write-heavy region.
 This sounds not very reasonable, right? Maybe write requests from a region 
 can only be served by a sub-set of the handler threads, and then write 
 blocking of any single region can't lead to the scenario mentioned above?
 Comment?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8323) Low hanging checksum improvements

2013-05-25 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13667167#comment-13667167
 ] 

Todd Lipcon commented on HBASE-8323:


bq. Enable NativeCrc32 to be used as a checksum algo. It is not clear how much 
gain we can expect over pure java CRC32.

The gain's really big -- something around 10x CPU savings. Obviously that 
doesn't turn into a 10x improvement of HBase throughput, but I bet it would be 
substantial.

 Low hanging checksum improvements
 -

 Key: HBASE-8323
 URL: https://issues.apache.org/jira/browse/HBASE-8323
 Project: HBase
  Issue Type: Improvement
Reporter: Enis Soztutar

 Over at Hadoop land, [~tlipcon] had done some improvements for checksums, a 
 native implementation for CRC32C (HADOOP-7445) and bulk verify of checksums 
 (HADOOP-7444). 
 In HBase, we can do
  - Also develop a bulk verify API. Regardless of 
 hbase.hstore.bytes.per.checksum, we always want to verify the whole 
 checksum for the hfile block.
  - Enable NativeCrc32 to be used as a checksum algo. It is not clear how much 
 gain we can expect over pure java CRC32. 
 Though, longer term, we should focus on convincing the hdfs folks to do 
 inline checksums (HDFS-2699)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7958) Statistics per-column family per-region

2013-02-27 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13589117#comment-13589117
 ] 

Todd Lipcon commented on HBASE-7958:


Before we get too much into the detail, can we clarify what kind of statistics 
we're interested in collecting in the first place? There are a bunch of 
different things we could collect, maybe it's good to enumerate some of them 
and list some of the potential applications of them before we get into the 
details of how they're implemented.

Here are a few of the places where I've considered adding statistics in the 
past -- though they fall into different buckets which not everyone might 
consider statistics :) :

- *Block heat* -- keep a reservoir sample of which rows in memstore have been 
read recently. When we flush the file, create a bitmap based on the sample 
mapping each flushed HFile block to its heat. These heat maps could be 
re-generated periodically based on block cache contents after the file is 
flushed. (something like 2 bits per HFile block would mean that the heat map 
for even a very large region could be re-written to disk in only a few MB). 
*Use case*: when we move a region to another server, it can pre-warm its 
cache more effectively.
- *Row key distribution* -- this seems to be the thing that people are talking 
about here mostly. Useful for calculating better split points for MR or region 
splits.
- *Row key cardinality* - useful for join ordering in SQL engines with 
optimizers
- *Column qualifier and cell value cardinality* - useful for join ordering as 
well as potentially automatic dictionary-coding?

There are bunches of others that could be brainstormed up... so my main point 
is: what do we mean by stats? How should we build this so that it's extensible 
and usable for future stats as well as whatever first one we want to implement?
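The block-heat idea above rests on reservoir sampling; a minimal sketch of just the sampling piece (classic Algorithm R, with illustrative names) keeps a fixed-size uniform sample of the stream of row reads without storing the whole stream:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Algorithm R: after the reservoir fills, the i-th observation replaces a
// random slot with probability capacity/i, giving a uniform sample.
final class RowReadSample {
    private final List<String> reservoir;
    private final int capacity;
    private final Random rnd;
    private long seen = 0;

    RowReadSample(int capacity, long seed) {
        this.capacity = capacity;
        this.reservoir = new ArrayList<>(capacity);
        this.rnd = new Random(seed);
    }

    void observe(String rowKey) {
        seen++;
        if (reservoir.size() < capacity) {
            reservoir.add(rowKey);
        } else {
            long j = (long) (rnd.nextDouble() * seen);  // uniform in [0, seen)
            if (j < capacity) reservoir.set((int) j, rowKey);
        }
    }

    List<String> sample() { return new ArrayList<>(reservoir); }
}
```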

 Statistics per-column family per-region
 ---

 Key: HBASE-7958
 URL: https://issues.apache.org/jira/browse/HBASE-7958
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.96.0
Reporter: Jesse Yates
 Fix For: 0.96.0


 Originating from this discussion on the dev list: 
 http://search-hadoop.com/m/coDKU1urovS/Simple+stastics+per+region/v=plain
 Essentially, we should have built-in statistics gathering for HBase tables. 
 This allows clients to have a better understanding of the distribution of 
 keys within a table and a given region. We could also surface this 
 information via the UI.
 There are a couple different proposals from the email, the overview is this:
 We add in something on compactions that gathers stats about the keys that are 
 written and then we surface them to a table.
 The possible proposals include:
 *How to implement it?*
 # Coprocessors - 
 ** advantage - it easily plugs in and people could pretty easily add their 
 own statistics. 
 ** disadvantage - UI elements would also require this, we get into dependent 
 loading, which leads down the OSGi path. Also, these CPs need to be installed 
 _after_ all the other CPs on compaction to ensure they see exactly what gets 
 written (doable, but a pain)
 # Built into HBase as a custom scanner
 ** advantage - always goes in the right place and no need to muck about with 
 loading CPs etc.
 ** disadvantage - less pluggable, at least for the initial cut
 *Where do we store data?*
 # .META.
 ** advantage - its an existing table, so we can jam it into another CF there
 ** disadvantage - this would make META much larger, possibly leading to 
 splits AND will make it much harder for other processes to read the info
 # A new stats table
 ** advantage - cleanly separates out the information from META
 ** disadvantage - should use a 'system table' idea to prevent accidental 
 deletion, manipulation by arbitrary clients, but still allow clients to read 
 it.
 Once we have this framework, we can then move to an actual implementation of 
 various statistics.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7608) Considering JDK8

2013-01-17 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13556518#comment-13556518
 ] 

Todd Lipcon commented on HBASE-7608:


What's an actual feasible timeline for anyone running JDK8? Most of the above 
improvements seem like they would _only_ be supported on JDK8 and no earlier 
versions... seems to me that we wouldn't be able to drop Java6 and Java7 
support for at least 3-4 more years at the earliest.

 Considering JDK8
 

 Key: HBASE-7608
 URL: https://issues.apache.org/jira/browse/HBASE-7608
 Project: HBase
  Issue Type: Umbrella
Reporter: Andrew Purtell
Priority: Trivial

 Musings (as subtasks) on experimental ideas for when JRE8 is a viable runtime.



[jira] [Commented] (HBASE-7387) StoreScanner need to be able to be subclassed

2013-01-16 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555714#comment-13555714
 ] 

Todd Lipcon commented on HBASE-7387:


But if they're private implementation details, then you should expect that they 
might change (eg we might find a new optimization or restructure the internals 
of the scanner). Then your coprocessor would break, whereas if you duplicated 
the logic (perhaps by sharing various utility code), it would be less fragile.

I haven't looked at the CP hook recently -- do you have to provide a 
StoreScanner, or is there an upper-level interface that you need to share? 
What functionality is actually shared between your entirely different scanner 
implementation and the usual one?

 StoreScanner need to be able to be subclassed
 -

 Key: HBASE-7387
 URL: https://issues.apache.org/jira/browse/HBASE-7387
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 0.96.0
Reporter: Raymond Liu
Assignee: Raymond Liu
Priority: Minor
 Fix For: 0.96.0

 Attachments: HBASE_7387_v2.patch, StoreScanner.patch


 StoreScanner can be replaced via the preStoreScannerOpen hook with a CP. In 
 order to reuse most of the logic in the current StoreScanner, subclassing it 
 might be the best approach. Thus a lot of private members need to be changed 
 from private to protected.
 At present, in order to implement a custom StoreScanner for dot 
 (HBASE-6805), only a few of the private members need to be changed, as in the 
 attached StoreScanner.patch. Alternatively, should we change all the 
 reasonable fields from private to protected, as in HBASE-7387-v?.patch?



[jira] [Commented] (HBASE-7387) StoreScanner need to be able to be subclassed

2013-01-15 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554535#comment-13554535
 ] 

Todd Lipcon commented on HBASE-7387:


I'm a little skeptical of this -- we don't want to give any impression that 
these protected methods are public interfaces. Anyone subclassing from an HBase 
type that isn't a first class extension point deserves to have their code break 
between versions. What's the use case that can't be accomplished by delegation 
(which is generally a much safer design choice)?

 StoreScanner need to be able to be subclassed
 -

 Key: HBASE-7387
 URL: https://issues.apache.org/jira/browse/HBASE-7387
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 0.96.0
Reporter: Raymond Liu
Assignee: Raymond Liu
Priority: Minor
 Fix For: 0.96.0

 Attachments: HBASE_7387_v2.patch, StoreScanner.patch


 StoreScanner can be replaced via the preStoreScannerOpen hook with a CP. In 
 order to reuse most of the logic in the current StoreScanner, subclassing it 
 might be the best approach. Thus a lot of private members need to be changed 
 from private to protected.
 At present, in order to implement a custom StoreScanner for dot 
 (HBASE-6805), only a few of the private members need to be changed, as in the 
 attached StoreScanner.patch. Alternatively, should we change all the 
 reasonable fields from private to protected, as in HBASE-7387-v?.patch?



[jira] [Commented] (HBASE-7544) Transparent HFile encryption

2013-01-11 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551589#comment-13551589
 ] 

Todd Lipcon commented on HBASE-7544:


I'm a little skeptical: why not do this at the HDFS layer?

 Transparent HFile encryption
 

 Key: HBASE-7544
 URL: https://issues.apache.org/jira/browse/HBASE-7544
 Project: HBase
  Issue Type: New Feature
  Components: HFile, io
Affects Versions: 0.96.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell

 Introduce transparent encryption of HBase on disk data.
 Depends on a separate contribution of an encryption codec framework to Hadoop 
 core and an AES-NI (native code) codec.
 I have an experimental patch that introduces encryption at the HFile level, 
 with all necessary changes propagated up to the HStore level. For the most 
 part, the changes are straightforward and mechanical. After HBASE-7414, we 
 can introduce specification of an optional encryption codec in the file 
 trailer. The work is not ready to go yet because key management and the HBCK 
 pieces are TBD.
 Requirements:
 - Transparent encryption at the CF or table level
 - Protect against all data leakage from files at rest
 - Two-tier key architecture for consistency with best practices for this 
 feature in the RDBMS world
 - Built-in key management
 - Flexible and non-intrusive key rotation
 - Mechanisms not exposed to or modifiable by users
 - Hardware security module integration (via Java KeyStore)
 - HBCK support for transparently encrypted files (+ plugin architecture for 
 HBCK)
 Additional goals:
 - Shell support for administrative functions
 - Avoid performance impact for the null crypto codec case
 - Play nicely with other changes underway: in HFile, block coding, etc.
 We're aiming for rough parity with Oracle's transparent tablespace encryption 
 feature, described in 
 http://www.oracle.com/technetwork/database/owp-security-advanced-security-11gr-133411.pdf
  as
 {quote}
 “Transparent Data Encryption uses a 2-tier key architecture for flexible and 
 non-intrusive key rotation and least operational and performance impact: Each 
 application table with at least one encrypted column has its own table key, 
 which is applied to all encrypted columns in that table. Equally, each 
 encrypted tablespace has its own tablespace key. Table keys are stored in the 
 data dictionary of the database, while tablespace keys are stored in the 
 header of the tablespace and additionally, the header of each underlying OS 
 file that makes up the tablespace.  Each of these keys is encrypted with the 
 TDE master encryption key, which is stored outside of the database in an 
 external security module: either the Oracle Wallet (a PKCS#12 formatted file 
 that is encrypted using a passphrase supplied either by the designated 
 security administrator or DBA during setup),  or a Hardware Security Module 
 (HSM) device for higher assurance […]”
 {quote}
 Further design details forthcoming in a design document and patch as soon as 
 we have all of the clearances in place.



[jira] [Commented] (HBASE-7544) Transparent table/CF encryption

2013-01-11 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551645#comment-13551645
 ] 

Todd Lipcon commented on HBASE-7544:


bq. Should we do compression at the HDFS layer?

IMO yes, probably :)

bq. Can you be more specific with what you have in mind? Say we have per CF 
keys and want to set up readers and writers to use them, what kind of provision 
would/could HDFS have for that?

I'll admit I missed the bit above about per-CF keys. That's a little odd, 
though, because the 'hbase' user would have to have access to all the keys, 
and that user is the only one who would have access to the on-disk files. 
What's the threat model here?

 Transparent table/CF encryption
 ---

 Key: HBASE-7544
 URL: https://issues.apache.org/jira/browse/HBASE-7544
 Project: HBase
  Issue Type: New Feature
  Components: HFile, io
Affects Versions: 0.96.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell

 Introduce transparent encryption of HBase on disk data.
 Depends on a separate contribution of an encryption codec framework to Hadoop 
 core and an AES-NI (native code) codec. This is work done in the context of 
 MAPREDUCE-4491 but I'd gather there will be additional JIRAs for common and 
 HDFS parts of it.
 Requirements:
 - Transparent encryption at the CF or table level
 - Protect against all data leakage from files at rest
 - Two-tier key architecture for consistency with best practices for this 
 feature in the RDBMS world
 - Built-in key management
 - Flexible and non-intrusive key rotation
 - Mechanisms not exposed to or modifiable by users
 - Hardware security module integration (via Java KeyStore)
 - HBCK support for transparently encrypted files (+ plugin architecture for 
 HBCK)
 Additional goals:
 - Shell support for administrative functions
 - Avoid performance impact for the null crypto codec case
 - Play nicely with other changes underway: in HFile, block coding, etc.
 We're aiming for rough parity with Oracle's transparent tablespace encryption 
 feature, described in 
 http://www.oracle.com/technetwork/database/owp-security-advanced-security-11gr-133411.pdf
  as
 {quote}
 “Transparent Data Encryption uses a 2-tier key architecture for flexible and 
 non-intrusive key rotation and least operational and performance impact: Each 
 application table with at least one encrypted column has its own table key, 
 which is applied to all encrypted columns in that table. Equally, each 
 encrypted tablespace has its own tablespace key. Table keys are stored in the 
 data dictionary of the database, while tablespace keys are stored in the 
 header of the tablespace and additionally, the header of each underlying OS 
 file that makes up the tablespace.  Each of these keys is encrypted with the 
 TDE master encryption key, which is stored outside of the database in an 
 external security module: either the Oracle Wallet (a PKCS#12 formatted file 
 that is encrypted using a passphrase supplied either by the designated 
 security administrator or DBA during setup),  or a Hardware Security Module 
 (HSM) device for higher assurance […]”
 {quote}
 Further design details forthcoming in a design document and patch as soon as 
 we have all of the clearances in place.



[jira] [Commented] (HBASE-7474) Endpoint Implementation to support Scans with Sorting of Rows based on column values(similar to order by clause of RDBMS)

2013-01-02 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13542549#comment-13542549
 ] 

Todd Lipcon commented on HBASE-7474:


What's the advantage of doing this, server side?

Server-side filtering and server-side aggregation make sense, because they 
reduce the amount of data that has to be transferred over the network. But a 
server-side sort will transfer the same amount of data - it just moves extra 
CPU cost from the client to the server, which IMO seems like the wrong 
direction.

 Endpoint Implementation to support Scans with Sorting of Rows based on column 
 values(similar to order by clause of RDBMS)
 ---

 Key: HBASE-7474
 URL: https://issues.apache.org/jira/browse/HBASE-7474
 Project: HBase
  Issue Type: New Feature
  Components: Coprocessors, Scanners
Affects Versions: 0.94.3
Reporter: Anil Gupta
Priority: Minor
  Labels: coprocessors, scan, sort
 Fix For: 0.94.4

 Attachments: SortingEndpoint_high_level_flowchart.pdf


 Recently, I have developed an Endpoint which can sort the Results (rows) on 
 the basis of column values. This functionality is similar to the ORDER BY 
 clause of an RDBMS. I will be submitting this patch for HBase 0.94.3.
 I am almost done with the initial development and testing of the feature, but 
 I still need to write the JUnit tests for it. I will also try to produce a 
 design doc.
 Thanks,
 Anil Gupta
 Software Engineer II, Intuit, inc



[jira] [Commented] (HBASE-5945) Reduce buffer copies in IPC server response path

2012-12-20 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537592#comment-13537592
 ] 

Todd Lipcon commented on HBASE-5945:


Was reminded of this JIRA today. Anyone feel like picking it up? I'm a 
delinquent HBase contributor these days, so probably better if someone steals 
it from me (sorry guys!)

 Reduce buffer copies in IPC server response path
 

 Key: HBASE-5945
 URL: https://issues.apache.org/jira/browse/HBASE-5945
 Project: HBase
  Issue Type: Improvement
  Components: IPC/RPC
Affects Versions: 0.96.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Blocker
 Attachments: buffer-copies.txt, even-fewer-copies.txt, hbase-5495.txt


 The new PB code is sloppy with buffers and makes several needless copies. 
 This increases GC time a lot. A few simple changes can cut this back down.



[jira] [Commented] (HBASE-7384) Introducing waitForCondition function into test cases

2012-12-18 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535552#comment-13535552
 ] 

Todd Lipcon commented on HBASE-7384:


Hadoop Common has GenericTestUtils#waitFor which basically does this (though 
not with a backoff).
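A sketch of such a utility with the simple exponential backoff mentioned above added; the names and shape here are illustrative, not the actual GenericTestUtils code:

```java
// Illustrative polling wait: checks a condition repeatedly, sleeping between
// checks with an exponential backoff, until the condition holds or a
// deadline passes. Returns true on success, false on timeout, so callers
// can report a clear time-out failure.
public class WaitUtil {
    public interface Condition {
        boolean check();
    }

    public static boolean waitFor(Condition condition, long initialDelayMs,
            long maxDelayMs, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        long delay = initialDelayMs;
        while (!condition.check()) {
            if (System.currentTimeMillis() >= deadline) {
                return false;  // timed out before the condition became true
            }
            try {
                Thread.sleep(delay);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                return false;
            }
            delay = Math.min(delay * 2, maxDelayMs);  // back off, capped
        }
        return true;
    }
}
```

A test would then assert on the boolean result and fail with an explicit time-out message, rather than each call site inventing its own loop.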

 Introducing waitForCondition function into test cases
 -

 Key: HBASE-7384
 URL: https://issues.apache.org/jira/browse/HBASE-7384
 Project: HBase
  Issue Type: Test
  Components: test
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong

 Recently I'm working on flaky test cases and found we have many places that 
 use a while loop and sleep to wait for a condition to become true. There are 
 several issues with the existing approach:
 1) Lots of similar code doing the same thing
 2) When a time out happens, different errors are reported without explicitly 
 indicating a time-out situation
 3) When we want to increase the max timeout value to verify whether a test 
 case fails due to an insufficient timeout, we have to recompile & redeploy 
 the code
 I propose to create a waitForCondition function as a test utility function 
 like the following:
 {code}
 public interface WaitCheck {
   public boolean Check();
 }

 public boolean waitForCondition(int timeOutInMilliSeconds,
     int checkIntervalInMilliSeconds, WaitCheck s)
     throws InterruptedException {
   int multiplier = 1;
   String multiplierProp = System.getProperty("extremeWaitMultiplier");
   if (multiplierProp != null) {
     multiplier = Integer.parseInt(multiplierProp);
     if (multiplier < 1) {
       LOG.warn(String.format("Invalid extremeWaitMultiplier "
           + "property value:%s. is ignored.", multiplierProp));
       multiplier = 1;
     }
   }
   int timeElapsed = 0;
   while (timeElapsed < timeOutInMilliSeconds * multiplier) {
     if (s.Check()) {
       return true;
     }
     Thread.sleep(checkIntervalInMilliSeconds);
     timeElapsed += checkIntervalInMilliSeconds;
   }
   assertTrue("WaitForCondition failed due to time out ("
       + timeOutInMilliSeconds + " milliseconds expired)", false);
   return false;
 }
 {code}
 By doing it the above way, there are several advantages:
 1) Clearly report a time-out error when such a situation happens
 2) Use the system property "extremeWaitMultiplier" to increase the max time 
 out dynamically for quick verification
 3) Standardize the current wait situations
 Please let me know your thoughts on this.
 Thanks,
 -Jeffrey



[jira] [Commented] (HBASE-4443) optimize/avoid seeking to previous block when key you are interested in is the first one of a block

2012-12-14 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13532650#comment-13532650
 ] 

Todd Lipcon commented on HBASE-4443:


Hey Kannan. Thanks for the pointer and explanation. It all makes good sense. 
Given that the other JIRA is committed for 0.96, can we now close this one as 
a dup?

 optimize/avoid seeking to previous block when key you are interested in is 
 the first one of a block
 -

 Key: HBASE-4443
 URL: https://issues.apache.org/jira/browse/HBASE-4443
 Project: HBase
  Issue Type: Improvement
Reporter: Kannan Muthukkaruppan

 This issue primarily affects cases when you are storing large blobs, i.e. 
 when blocks contain a small number of keys, and the chance of the key you are 
 looking for being the first key of a block is higher.
 Say, you are looking for row/col, and row/col/ts=5 is the latest version 
 of the key in the HFile and is at the beginning of block X.
 The search for the key is done by looking for row/col/TS=Long.MAX_VAL, but 
 this will land us in block X-1 (because ts=Long.MAX_VAL sorts ahead of ts=5); 
 only to find that there is no matching row/col in block X-1, and then we'll 
 advance to block X to return the value.
 Seems like we should be able to optimize this somehow.
 Some possibilities:
 1) Suppose we track that the  file contains no deletes, and if the CF setting 
 has MAX_VERSIONS=1, we can know for sure that block X - 1 does not contain 
 any relevant data, and directly position the seek to block X. [This will also 
 require the memstore flusher to remove extra versions if MAX_VERSION=1 and 
 not allow the file to contain duplicate entries for the same ROW/COL.]  
 Tracking deletes will also avoid in many cases, the seek to the top of the 
 row to look for DeleteFamily.
 2) Have a dense index (1 entry per KV in the index; this might be ok for 
 large object case since index vs. data ratio will still be low).
 3) Have the index contain the last KV of each block also in addition to the 
 first KV. This doubles the size of the index though.
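The off-by-one-block behavior described above falls directly out of the key ordering: timestamps sort descending, so a seek key built with TS=Long.MAX_VAL compares ahead of every stored version of the same row/col. A simplified sketch of that ordering property (not the real KeyValue comparator, which covers more fields and uses byte-level comparison):

```java
// Simplified (row, column, timestamp) key illustrating why a seek key with
// ts=Long.MAX_VALUE lands just BEFORE every real cell for the same row/col:
// rows and columns sort ascending, timestamps sort DESCENDING so that the
// newest version of a cell comes first.
public class SimpleKey implements Comparable<SimpleKey> {
    final String row;
    final String col;
    final long ts;

    SimpleKey(String row, String col, long ts) {
        this.row = row;
        this.col = col;
        this.ts = ts;
    }

    public int compareTo(SimpleKey o) {
        int c = row.compareTo(o.row);
        if (c != 0) return c;
        c = col.compareTo(o.col);
        if (c != 0) return c;
        return Long.compare(o.ts, ts);  // reversed: larger timestamp sorts first
    }
}
```

Because the seek key sorts ahead of the newest real cell, an index lookup positions the reader in the preceding block when that cell happens to be the first key of its block, which is exactly the wasted seek this issue wants to avoid.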



[jira] [Commented] (HBASE-7317) server-side request problems are hard to debug

2012-12-12 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13530177#comment-13530177
 ] 

Todd Lipcon commented on HBASE-7317:


The license is already Apache, so if someone wants to make changes and send a 
pull request, I'm happy to pull them in and publish a new version of htrace. I 
don't think we need substantial changes to htrace itself - more work is 
remaining in the trace collection / viewing area.

 server-side request problems are hard to debug
 --

 Key: HBASE-7317
 URL: https://issues.apache.org/jira/browse/HBASE-7317
 Project: HBase
  Issue Type: Brainstorming
  Components: IPC/RPC, regionserver
Reporter: Sergey Shelukhin
Priority: Minor

 I've seen cases during integration tests where the write or read request took 
 an unexpectedly large amount of time (that, after the client went to the 
 region server that is reported alive and well, which I know from temporary 
 debug logging :)), and it's impossible to understand what is going on on the 
 server side, short of catching the moment with jstack.
 Some solutions (off by default) could be 
 - a facility for tests (especially integration tests) that would trace 
 Server/Master calls into some log or file (won't help with internals but at 
 least one could see what was actually received);
 - logging the progress of requests between components inside master/server 
 (e.g. request id=N received, request id=N is being processed in MyClass, 
 N being drawn on client from local sequence - no guarantees of uniqueness are 
 necessary).



[jira] [Commented] (HBASE-7317) server-side request problems are hard to debug

2012-12-12 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13530273#comment-13530273
 ] 

Todd Lipcon commented on HBASE-7317:


We can't put it in org.apache.* unless it's an Apache project. If you want to 
submit it to the incubator as a project I would be interested in joining up, 
but our thinking at the time of development was that it's a small enough piece 
of code that it would be easier to just develop on github until it got traction 
in a bunch of projects.

There's no restriction that Apache projects only depend on other Apache 
projects - eg we depend on Google libraries like protobuf and guava.

 server-side request problems are hard to debug
 --

 Key: HBASE-7317
 URL: https://issues.apache.org/jira/browse/HBASE-7317
 Project: HBase
  Issue Type: Brainstorming
  Components: IPC/RPC, regionserver
Reporter: Sergey Shelukhin
Priority: Minor

 I've seen cases during integration tests where the write or read request took 
 an unexpectedly large amount of time (that, after the client went to the 
 region server that is reported alive and well, which I know from temporary 
 debug logging :)), and it's impossible to understand what is going on on the 
 server side, short of catching the moment with jstack.
 Some solutions (off by default) could be 
 - a facility for tests (especially integration tests) that would trace 
 Server/Master calls into some log or file (won't help with internals but at 
 least one could see what was actually received);
 - logging the progress of requests between components inside master/server 
 (e.g. request id=N received, request id=N is being processed in MyClass, 
 N being drawn on client from local sequence - no guarantees of uniqueness are 
 necessary).



[jira] [Commented] (HBASE-7317) server-side request problems are hard to debug

2012-12-12 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13530403#comment-13530403
 ] 

Todd Lipcon commented on HBASE-7317:


I wouldn't want to put it in Hadoop common -- then we'd have to do elaborate 
stubbing in our compat code in order to use it while still supporting older 
versions. It is also useful for non-Hadoop projects (eg something like 
Cassandra)

 server-side request problems are hard to debug
 --

 Key: HBASE-7317
 URL: https://issues.apache.org/jira/browse/HBASE-7317
 Project: HBase
  Issue Type: Brainstorming
  Components: IPC/RPC, regionserver
Reporter: Sergey Shelukhin
Priority: Minor

 I've seen cases during integration tests where the write or read request took 
 an unexpectedly large amount of time (that, after the client went to the 
 region server that is reported alive and well, which I know from temporary 
 debug logging :)), and it's impossible to understand what is going on on the 
 server side, short of catching the moment with jstack.
 Some solutions (off by default) could be 
 - a facility for tests (especially integration tests) that would trace 
 Server/Master calls into some log or file (won't help with internals but at 
 least one could see what was actually received);
 - logging the progress of requests between components inside master/server 
 (e.g. request id=N received, request id=N is being processed in MyClass, 
 N being drawn on client from local sequence - no guarantees of uniqueness are 
 necessary).



[jira] [Commented] (HBASE-7317) server-side request problems are hard to debug

2012-12-12 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13530424#comment-13530424
 ] 

Todd Lipcon commented on HBASE-7317:


bq. That's disappointing. Then my concern about depending on a project in this 
state stands.

What do you mean? If there are bugs in the code, feel free to submit patches, 
and I'm happy to integrate them (I have commit access to the repo). If we end 
up with several contributors, I don't foresee any issues proposing it for 
Apache incubation.

 server-side request problems are hard to debug
 --

 Key: HBASE-7317
 URL: https://issues.apache.org/jira/browse/HBASE-7317
 Project: HBase
  Issue Type: Brainstorming
  Components: IPC/RPC, regionserver
Reporter: Sergey Shelukhin
Priority: Minor

 I've seen cases during integration tests where the write or read request took 
 an unexpectedly large amount of time (that, after the client went to the 
 region server that is reported alive and well, which I know from temporary 
 debug logging :)), and it's impossible to understand what is going on on the 
 server side, short of catching the moment with jstack.
 Some solutions (off by default) could be 
 - a facility for tests (especially integration tests) that would trace 
 Server/Master calls into some log or file (won't help with internals but at 
 least one could see what was actually received);
 - logging the progress of requests between components inside master/server 
 (e.g. request id=N received, request id=N is being processed in MyClass, 
 N being drawn on client from local sequence - no guarantees of uniqueness are 
 necessary).



[jira] [Commented] (HBASE-7317) server-side request problems are hard to debug

2012-12-11 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13529625#comment-13529625
 ] 

Todd Lipcon commented on HBASE-7317:


Indeed we already have the hooks for RPC. What we lack are interesting trace 
points, such as before going to DFS, etc. We also lack a good central 
collector to coalesce the trace info and display it usefully.

 server-side request problems are hard to debug
 --

 Key: HBASE-7317
 URL: https://issues.apache.org/jira/browse/HBASE-7317
 Project: HBase
  Issue Type: Brainstorming
  Components: IPC/RPC, regionserver
Reporter: Sergey Shelukhin
Priority: Minor

 I've seen cases during integration tests where the write or read request took 
 an unexpectedly large amount of time (that, after the client went to the 
 region server that is reported alive and well, which I know from temporary 
 debug logging :)), and it's impossible to understand what is going on on the 
 server side, short of catching the moment with jstack.
 Some solutions (off by default) could be 
 - a facility for tests (especially integration tests) that would trace 
 Server/Master calls into some log or file (won't help with internals but at 
 least one could see what was actually received);
 - logging the progress of requests between components inside master/server 
 (e.g. request id=N received, request id=N is being processed in MyClass, 
 N being drawn on client from local sequence - no guarantees of uniqueness are 
 necessary).



[jira] [Commented] (HBASE-7233) Serializing KeyValues when passing them over RPC

2012-12-05 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13510732#comment-13510732
 ] 

Todd Lipcon commented on HBASE-7233:


For the RPC transport, I'd vote that we reuse some of the block encoder type 
stuff that we've got in HFile. That way we get prefix compression on the 
transport of a list of KVs within RPC, which should improve performance.
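The prefix-compression idea rests on the KVs in a response being sorted: each key can then be shipped as (length of prefix shared with the previous key, remaining suffix). The toy string version below only illustrates the principle; it is not the actual HFile block-encoder wire format:

```java
import java.util.ArrayList;
import java.util.List;

// Toy prefix encoder for a sorted key list: each entry records how many
// leading characters it shares with the previous key, plus its unshared tail.
// Adjacent keys in a scan result share long prefixes (row, family), so the
// tails are short and the encoded form is much smaller.
public class PrefixCodec {
    public static class Entry {
        final int shared;
        final String tail;
        Entry(int shared, String tail) {
            this.shared = shared;
            this.tail = tail;
        }
    }

    public static List<Entry> encode(List<String> sortedKeys) {
        List<Entry> out = new ArrayList<Entry>();
        String prev = "";
        for (String key : sortedKeys) {
            int shared = 0;
            int max = Math.min(prev.length(), key.length());
            while (shared < max && prev.charAt(shared) == key.charAt(shared)) {
                shared++;
            }
            out.add(new Entry(shared, key.substring(shared)));
            prev = key;
        }
        return out;
    }

    public static List<String> decode(List<Entry> encoded) {
        List<String> out = new ArrayList<String>();
        String prev = "";
        for (Entry e : encoded) {
            String key = prev.substring(0, e.shared) + e.tail;
            out.add(key);
            prev = key;
        }
        return out;
    }
}
```

Reusing the real HFile block encoders for the RPC payload would give the same effect without maintaining a second codec.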

 Serializing KeyValues when passing them over RPC
 

 Key: HBASE-7233
 URL: https://issues.apache.org/jira/browse/HBASE-7233
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
Priority: Blocker
 Attachments: 7233.txt, 7233-v2.txt


 Undo KeyValue being a Writable.



[jira] [Commented] (HBASE-7213) Have HLog files for .META. edits only

2012-11-29 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506730#comment-13506730
 ] 

Todd Lipcon commented on HBASE-7213:


I agree it seems to make sense to lump this with the multi-WAL work. Perhaps an 
interface like WALFactory or WALProvider, which, given a region name, gives 
back a WAL instance? The basic implementation would always provide the single 
WAL. Then, we could add the feature that returns a different WAL for META 
alone. More complex implementations could choose to give different tenants of a 
cluster separate WALs, etc.
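A minimal sketch of that factory shape, with illustrative names (this is not the eventual HBase interface):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the WALProvider idea: a policy object maps a region
// name to the WAL its edits should go to. The default policy keeps a single
// shared WAL; a META-separating policy gives .META. its own WAL.
public class WalProviderSketch {
    // Stand-in for a real write-ahead log.
    public static class Wal {
        final List<String> edits = new ArrayList<String>();
        void append(String edit) { edits.add(edit); }
    }

    public interface WalProvider {
        Wal getWal(String regionName);
    }

    // Basic implementation: one shared WAL for every region.
    public static class SingleWalProvider implements WalProvider {
        private final Wal wal = new Wal();
        public Wal getWal(String regionName) { return wal; }
    }

    // Policy discussed above: META edits get their own WAL; all other
    // regions share one. More complex policies (per-tenant WALs, etc.)
    // would slot in the same way.
    public static class MetaSeparatingProvider implements WalProvider {
        private final Wal metaWal = new Wal();
        private final Wal userWal = new Wal();
        public Wal getWal(String regionName) {
            return regionName.startsWith(".META.") ? metaWal : userWal;
        }
    }
}
```

The region server would ask the provider for a WAL at region-open time, so swapping policies needs no changes to the write path itself.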

 Have HLog files for .META. edits only
 -

 Key: HBASE-7213
 URL: https://issues.apache.org/jira/browse/HBASE-7213
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver
Reporter: Devaraj Das
Assignee: Devaraj Das
 Attachments: 7213-in-progress.patch


 Over on HBASE-6774, there is a discussion on separating out the edits for 
 .META. regions from the other regions' edits w.r.t where the edits are 
 written. This jira is to track an implementation of that.



[jira] [Commented] (HBASE-7213) Have HLog files for .META. edits only

2012-11-29 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13506953#comment-13506953
 ] 

Todd Lipcon commented on HBASE-7213:


Yea, I don't think you need to strictly sequence this after the multi-WAL work. 
But it would be nice to have the end goal in mind while doing this work. 
Sorry, haven't had time to look at the in-progress patch, but if there's a 
simple solution that works OK now, no sense blocking it for the perfect 
end-game solution later.

 Have HLog files for .META. edits only
 -

 Key: HBASE-7213
 URL: https://issues.apache.org/jira/browse/HBASE-7213
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver
Reporter: Devaraj Das
Assignee: Devaraj Das
 Attachments: 7213-in-progress.patch


 Over on HBASE-6774, there is a discussion on separating out the edits for 
 .META. regions from the other regions' edits w.r.t where the edits are 
 written. This jira is to track an implementation of that.



[jira] [Commented] (HBASE-5995) Fix and reenable TestLogRolling.testLogRollOnPipelineRestart

2012-11-16 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13499250#comment-13499250
 ] 

Todd Lipcon commented on HBASE-5995:


I'm remembering now that this was due to HDFS-2288, which got closed as 
invalid. But, I still think HDFS-2288 is valid, so I will do my best to revive 
it and convince other HDFS developers of that :)

 Fix and reenable TestLogRolling.testLogRollOnPipelineRestart
 

 Key: HBASE-5995
 URL: https://issues.apache.org/jira/browse/HBASE-5995
 Project: HBase
  Issue Type: Sub-task
  Components: test
Affects Versions: 0.96.0
Reporter: stack
Assignee: Todd Lipcon
Priority: Blocker
 Fix For: 0.96.0


 HBASE-5984 disabled this flakey test (See the issue for more).  This issue is 
 about getting it enabled again.  Made a blocker on 0.96.0 so it gets 
 attention.



[jira] [Commented] (HBASE-7108) Don't allow recovered.edits as legal family name

2012-11-06 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13491924#comment-13491924
 ] 

Todd Lipcon commented on HBASE-7108:


Instead of disallowing it, could we change recovered.edits to be something 
starting with a '.'? I think we already have some requirement that CFs not 
start with '.', right? (if not, is there any other prefix which we've already 
disallowed for users?)
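A minimal sketch of the kind of check being proposed — the reserved-prefix rule here is illustrative, not HBase's actual family-name validation:

```java
// Illustrative sketch of the validation being discussed: reject family names
// that start with '.', so that internal directories (like a dotted
// recovered-edits directory) can never collide with a user-created family.
public class FamilyNameValidator {

    public static boolean isLegalFamilyName(String family) {
        if (family == null || family.isEmpty()) {
            return false;
        }
        // Reserve the '.' prefix for internal use, per the suggestion above.
        if (family.charAt(0) == '.') {
            return false;
        }
        // Family names become directory names on the filesystem, so path
        // separators and control characters are out too.
        for (char c : family.toCharArray()) {
            if (c == '/' || c == '\\' || c == ':' || Character.isISOControl(c)) {
                return false;
            }
        }
        return true;
    }
}
```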

 Don't allow recovered.edits as legal family name
 --

 Key: HBASE-7108
 URL: https://issues.apache.org/jira/browse/HBASE-7108
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.92.2, 0.94.2, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.96.0

 Attachments: HBASE-7108-v0.patch


 Region directories can contain a folder called recovered.edits, log 
 splitting related.
 But there's nothing that prevents a user from creating a family with that name...



[jira] [Commented] (HBASE-7108) Don't allow recovered.edits as legal family name

2012-11-06 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13491991#comment-13491991
 ] 

Todd Lipcon commented on HBASE-7108:


For compatibility, one option would be:
- in 0.94: change code to recognize both recovered.edits and 
.recovered.edits at region-open time
- in 0.96: change code to write to .recovered.edits and recognize both

Then, so long as someone is upgrading from the most recent 0.94 to 0.96, they'd 
be fine. An upgrade from an older 0.94 or 0.92 to 0.96 would potentially have 
an issue if there were a failure in the middle of a rolling upgrade.

Would that be acceptable?
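The two-phase recognition could look something like this (a sketch only; the method and constant names are invented for illustration):

```java
// Sketch of the compatibility scheme above: readers accept both the old and
// the new directory name, while 0.96 writers would switch to the dotted
// name. All names here are illustrative.
public class RecoveredEditsCompat {

    static final String OLD_NAME = "recovered.edits";
    static final String NEW_NAME = ".recovered.edits";

    // Region-open time: replay edits from either directory name.
    public static boolean isRecoveredEditsDir(String dirName) {
        return OLD_NAME.equals(dirName) || NEW_NAME.equals(dirName);
    }

    // Write path: 0.94 keeps writing the old name; 0.96 writes the new one.
    public static String dirNameForWrite(boolean is096OrLater) {
        return is096OrLater ? NEW_NAME : OLD_NAME;
    }
}
```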



 Don't allow recovered.edits as legal family name
 --

 Key: HBASE-7108
 URL: https://issues.apache.org/jira/browse/HBASE-7108
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.92.2, 0.94.2, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.96.0

 Attachments: HBASE-7108-v0.patch


 Region directories can contain a folder called recovered.edits, log 
 splitting related.
 But there's nothing that prevents a user from creating a family with that name...



[jira] [Commented] (HBASE-7055) port HBASE-6371 tier-based compaction from 0.89-fb to trunk

2012-10-31 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13487997#comment-13487997
 ] 

Todd Lipcon commented on HBASE-7055:


I'm guessing the separate config file is used to support runtime reloading of 
the tiering strategies? Putting it in a separate config means that users won't 
get confused into thinking that other properties in hbase-site are reloadable.

 port HBASE-6371 tier-based compaction from 0.89-fb to trunk
 ---

 Key: HBASE-7055
 URL: https://issues.apache.org/jira/browse/HBASE-7055
 Project: HBase
  Issue Type: Task
  Components: Compaction
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.96.0

 Attachments: HBASE-6371-squashed.patch, HBASE-6371-v2-squashed.patch


 There's divergence in the code :(
 See HBASE-6371 for details.



[jira] [Commented] (HBASE-3835) Switch web pages to Jamon template engine instead of JSP

2012-10-31 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13488139#comment-13488139
 ] 

Todd Lipcon commented on HBASE-3835:


It looks like our current LICENSE.txt doesn't mention jamon at all, which is an 
oversight. It seems we should be calling it out as Mozilla Public License in 
our NOTICE.txt.

 Switch web pages to Jamon template engine instead of JSP
 

 Key: HBASE-3835
 URL: https://issues.apache.org/jira/browse/HBASE-3835
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver
Affects Versions: 0.92.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 0.92.0

 Attachments: hbase-3835.txt, hbase-3835.txt, hbase-3835.txt


 Jamon (http://jamon.org) is a template engine that I think is preferable to 
 JSP. You can read an interview with some comparisons vs JSP here: 
 http://www.artima.com/lejava/articles/jamon.html
 In particular, I think it will give us the following advantages:
 - Since we'll have a servlet in front of each template, it will encourage us 
 to write less inline Java code and do more code in the servlets.
 - Makes proper unit testing easier since you can trivially render a template 
 and pass in mock arguments without having to start a whole HTTP stack
 - Static typing of template arguments makes it easier to know at compile-time 
 if you've made a mistake.
 Thoughts? I converted the Master UI yesterday and it only took a couple of hours.



[jira] [Commented] (HBASE-7055) port HBASE-6371 tier-based compaction from 0.89-fb to trunk

2012-10-31 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13488216#comment-13488216
 ] 

Todd Lipcon commented on HBASE-7055:


+1 to separate out the compaction policy refactoring - that bit is nice.

 port HBASE-6371 tier-based compaction from 0.89-fb to trunk
 ---

 Key: HBASE-7055
 URL: https://issues.apache.org/jira/browse/HBASE-7055
 Project: HBase
  Issue Type: Task
  Components: Compaction
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.96.0

 Attachments: HBASE-6371-squashed.patch, HBASE-6371-v2-squashed.patch


 There's divergence in the code :(
 See HBASE-6371 for details.



[jira] [Commented] (HBASE-4254) Get tests passing on Hadoop 23

2012-10-30 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13487250#comment-13487250
 ] 

Todd Lipcon commented on HBASE-4254:


This has been subsumed by other JIRAs, right?

 Get tests passing on Hadoop 23
 --

 Key: HBASE-4254
 URL: https://issues.apache.org/jira/browse/HBASE-4254
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 0.92.3

 Attachments: HBASE-4254-92.patch


 Currently some 30 or so tests are failing on the HBase-trunk-on-hadoop-23 
 build. It looks like most are reflection-based issues.



[jira] [Resolved] (HBASE-3346) NPE in processRegionInTransition causing master abort

2012-10-30 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HBASE-3346.


Resolution: Cannot Reproduce

This has been quiet for a couple years, going to assume it's long gone.

 NPE in processRegionInTransition causing master abort
 -

 Key: HBASE-3346
 URL: https://issues.apache.org/jira/browse/HBASE-3346
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.90.0
Reporter: Todd Lipcon
Priority: Blocker

 After the abort in HBASE-3345, I restarted the master, and it crashed again 
 during startup
 2010-12-13 12:57:00,367 DEBUG 
 org.apache.hadoop.hbase.master.handler.ClosedRegionHandler: Handling CLOSED 
 event for c5005ca650c7e3bdbab4c8d3e9b7c618
 2010-12-13 12:57:00,367 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Table being disabled so 
 deleting ZK node and removing from regions in transition, skipping assignment 
 of region usertable
 2010-12-13 12:57:00,367 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: 
 master:6-0x12c780205200116 Deleting existing unassigned node for 
 c5005ca650c7e3bdbab4c8d3e9b7c618 that is in expected state
 2010-12-13 12:57:00,367 DEBUG org.apache.hadoop.hbase.zookeeper.ZKUtil: 
 master:6-0x12c780205200116 Retrieved 127 byte(s) of data from znode 
 /hbase/unassigned/c5005ca650c7e3bdbab4c8d3e9b7c618; data=
 2010-12-13 12:57:00,373 DEBUG 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: 
 master:6-0x12c780205200116 Received ZooKeeper Event, type=NodeDeleted, 
 state=SyncConnected, path=/hbase/unassigned/
 2010-12-13 12:57:00,374 DEBUG 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: 
 master:6-0x12c780205200116 Received ZooKeeper Event, 
 type=NodeChildrenChanged, state=SyncConnected, path=/hbase/una
 2010-12-13 12:57:00,374 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: 
 master:6-0x12c780205200116 Successfully deleted unassigned node for 
 region c5005ca650c7e3bdbab4c8d3e9b7c618 in expected sta
 2010-12-13 12:57:00,374 DEBUG org.apache.hadoop.hbase.zookeeper.ZKUtil: 
 master:6-0x12c780205200116 Retrieved 114 byte(s) of data from znode 
 /hbase/unassigned/3dc9df76f111271c150c853716ce1f07 and se
 2010-12-13 12:57:00,376 FATAL org.apache.hadoop.hbase.master.HMaster: 
 Unhandled exception. Starting shutdown.
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.processRegionInTransition(AssignmentManager.java:263)
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.processFailover(AssignmentManager.java:222)
 at 
 org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:392)
 at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:274)
 2010-12-13 12:57:00,377 INFO org.apache.hadoop.hbase.master.HMaster: Aborting



[jira] [Resolved] (HBASE-2441) ZK failures early in RS startup sequence cause infinite busy loop

2012-10-30 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HBASE-2441.


Resolution: Cannot Reproduce

Closing as cannot-reproduce, since this hasn't been seen in quite some time

 ZK failures early in RS startup sequence cause infinite busy loop
 -

 Key: HBASE-2441
 URL: https://issues.apache.org/jira/browse/HBASE-2441
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.20.3
Reporter: Todd Lipcon
Assignee: Todd Lipcon
  Labels: moved_from_0_20_5
 Attachments: ASF.LICENSE.NOT.GRANTED--hbase-2441.txt


 If the RS loses its ZK session before it reports for duty, the abort() call 
 will trigger an NPE, and then the stop boolean doesn't get toggled. The RS 
 will then loop forever trying to register itself in the expired ZK session, 
 and fill up the logs.



[jira] [Resolved] (HBASE-3682) Generate sources jar for tests

2012-10-30 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HBASE-3682.


Resolution: Cannot Reproduce

assuming this is invalid since we mavenized

 Generate sources jar for tests
 --

 Key: HBASE-3682
 URL: https://issues.apache.org/jira/browse/HBASE-3682
 Project: HBase
  Issue Type: Improvement
Reporter: Todd Lipcon
 Attachments: hbase-3682.txt


 We currently make source jars for the main artifact, but not the test 
 artifact.



[jira] [Resolved] (HBASE-3411) manually compact memstores?

2012-10-30 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HBASE-3411.


Resolution: Won't Fix

This seems to have been obviated by MSLAB in later 0.90, 0.92+

 manually compact memstores?
 ---

 Key: HBASE-3411
 URL: https://issues.apache.org/jira/browse/HBASE-3411
 Project: HBase
  Issue Type: Brainstorming
  Components: regionserver
Reporter: Todd Lipcon
 Attachments: hbase-3411.txt


 I have a theory and some experiments that indicate our heap fragmentation 
 issues have to do with the KV buffers from memstores ending up entirely 
 interleaved in the old gen. I had a bit of wine and came up with a wacky idea 
 to have a thread which continuously defragments memstore data buffers into 
 contiguous segments, hopefully to keep old gen fragmentation down.
 It didn't seem to work just yet, but wanted to show the patch to some people.



[jira] [Commented] (HBASE-7055) port HBASE-6371 tier-based compaction from 0.89-fb to trunk

2012-10-30 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13487264#comment-13487264
 ] 

Todd Lipcon commented on HBASE-7055:


I agree this needs some docs. I tried to look it over and barely understood it, 
even as a guy with a bit of hbase experience ;-)

 port HBASE-6371 tier-based compaction from 0.89-fb to trunk
 ---

 Key: HBASE-7055
 URL: https://issues.apache.org/jira/browse/HBASE-7055
 Project: HBase
  Issue Type: Task
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-6371-squashed.patch, HBASE-6371-v2-squashed.patch


 There's divergence in the code :(
 See HBASE-6371 for details.



[jira] [Resolved] (HBASE-2342) Consider adding a watchdog node next to region server

2012-10-30 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HBASE-2342.


Resolution: Won't Fix

I think this has been obviated by other work, in particular nkeywal's work to 
kill the ZK node as soon as the RS dies. We can re-open this if anyone sees a 
good reason.

 Consider adding a watchdog node next to region server
 -

 Key: HBASE-2342
 URL: https://issues.apache.org/jira/browse/HBASE-2342
 Project: HBase
  Issue Type: New Feature
  Components: regionserver
Reporter: Todd Lipcon

 This idea has been bandied about a fair amount. The concept is to add a 
 second java process that runs next to each region server to act as a 
 watchdog. Several possible purposes:
 - monitor the RS for liveness - if it exhibits Juliet syndrome (appears 
 dead) then we kill it aggressively to prevent it from coming back to life
 - restart RS automatically in failure cases
 - potentially move the entire ZK session to the watchdog to decouple node 
 liveness from the particular JVM liveness
 Let's discuss in this JIRA.



[jira] [Resolved] (HBASE-2363) Add configuration for tunable dataloss sensitivity

2012-10-30 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HBASE-2363.


Resolution: Duplicate

this has been somewhat addressed by hbase.skip.errors and 
hbase.hlog.split.skip.errors, implemented elsewhere

 Add configuration for tunable dataloss sensitivity
 --

 Key: HBASE-2363
 URL: https://issues.apache.org/jira/browse/HBASE-2363
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.20.3, 0.90.0
Reporter: Todd Lipcon

 There are many cases in HBase when the process detects that some data is not 
 quite right. We often have two choices: (a) throw up our hands and exit, or 
 (b) log a warning and push through the issue, potentially losing data. I 
 think some users are very sensitive to dataloss and would prefer that the 
 system become unavailable rather than continue with potentially lost data. 
 Other users just want the system to stay up, and if they lose a log segment 
 it's fine.



[jira] [Resolved] (HBASE-2444) LeaseStillHeldException is overloaded for other meanings in RS management

2012-10-30 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HBASE-2444.


Resolution: Not A Problem

no longer a problem - this exception is now only used by scanner lease 
management, not overloaded in master/rs

 LeaseStillHeldException is overloaded for other meanings in RS management
 -

 Key: HBASE-2444
 URL: https://issues.apache.org/jira/browse/HBASE-2444
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver
Reporter: Todd Lipcon
Priority: Minor

 If a region server that has already been declared dead reports to the master, 
 the master throws LeaseStillHeldException. This is not a very descriptive 
 exception for this case - we should either add a new exception for this 
 purpose, or make a general exception like RegionServerStateException and use 
 a descriptive message.



[jira] [Resolved] (HBASE-2629) Piggyback basic alarm framework on RS heartbeats

2012-10-30 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HBASE-2629.


Resolution: Won't Fix

Upon reflection a couple years later, I think we should just make sure to emit 
useful WARN logs for cases like this. Existing cluster monitoring systems (eg 
splunk or CM) are better suited to surface these logs to users than anything we 
could build ourselves inside hbase.

 Piggyback basic alarm framework on RS heartbeats
 --

 Key: HBASE-2629
 URL: https://issues.apache.org/jira/browse/HBASE-2629
 Project: HBase
  Issue Type: New Feature
  Components: master, regionserver
Reporter: Todd Lipcon

 There are a number of system conditions that can cause HBase to perform badly 
 or have stability issues. For example, significant swapping activity or 
 overloaded ZK will result in all kinds of problems.
 It would be nice to put a very lightweight alarm framework in place, so 
 that when the RS notices something is amiss, it can raise an alarm flag for 
 some period of time. These could be exposed by JMX to external monitoring 
 tools, and also displayed on the master web UI.
 Some example alarms:
 - ZK read took 1000ms
 - Long garbage collection pause detected
 - Writes blocked on region for longer than 5 seconds
 etc etc



[jira] [Commented] (HBASE-7055) port HBASE-6371 tier-based compaction from 0.89-fb to trunk

2012-10-30 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13487493#comment-13487493
 ] 

Todd Lipcon commented on HBASE-7055:


What I'm interested in is docs about how to _use_ the feature. Javadocs are for 
developers, not users. The refactoring is nice, but the new tiered compaction 
policy looks quite advanced, and introduces a new configuration file, which 
isn't obvious how to configure.

 port HBASE-6371 tier-based compaction from 0.89-fb to trunk
 ---

 Key: HBASE-7055
 URL: https://issues.apache.org/jira/browse/HBASE-7055
 Project: HBase
  Issue Type: Task
  Components: Compaction
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.96.0

 Attachments: HBASE-6371-squashed.patch, HBASE-6371-v2-squashed.patch


 There's divergence in the code :(
 See HBASE-6371 for details.



[jira] [Commented] (HBASE-4268) Add utility to entirely clear out ZK

2012-10-29 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13486472#comment-13486472
 ] 

Todd Lipcon commented on HBASE-4268:


Perhaps we can resolve this now, as ZK 3.4.x has now become fairly commonplace?

 Add utility to entirely clear out ZK
 

 Key: HBASE-4268
 URL: https://issues.apache.org/jira/browse/HBASE-4268
 Project: HBase
  Issue Type: New Feature
  Components: scripts
Affects Versions: 0.92.0
Reporter: Todd Lipcon

 In disaster scenarios, sometimes some cruft is left over in ZK, when it would 
 be better to do a truly clean startup. We should add a script which allows 
 the admin to clear out ZK while the cluster is down.



[jira] [Commented] (HBASE-6980) Parallel Flushing Of Memstores

2012-10-17 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478212#comment-13478212
 ] 

Todd Lipcon commented on HBASE-6980:


If I remember correctly, there is a reason for the flush marker: it ensures 
that the RS hasn't been fenced on HDFS -- i.e. that it hasn't lost its 
connection to ZK and already had its log splitting started.

The reason this is important is that, otherwise, it could move on to delete old 
log segments, which would potentially break the log split process.

It may be that the locking can be more lax, though.

 Parallel Flushing Of Memstores
 --

 Key: HBASE-6980
 URL: https://issues.apache.org/jira/browse/HBASE-6980
 Project: HBase
  Issue Type: New Feature
Reporter: Kannan Muthukkaruppan
Assignee: Kannan Muthukkaruppan

 For write dominated workloads, single threaded memstore flushing is an 
 unnecessary bottleneck. With a single flusher thread, we are basically not 
 setup to take advantage of the aggregate throughput that multi-disk nodes 
 provide.
 * For puts with WAL enabled, the bottleneck is more likely the single WAL 
 per region server. So this particular fix may not buy as much unless we 
 unlock that bottleneck with multiple commit logs per region server. (Topic 
 for a separate JIRA-- HBASE-6981).
 * But for puts with WAL disabled (e.g., when using HBASE-5783 style fast bulk 
 imports), we should be able to support much better ingest rates with parallel 
 flushing of memstores.



[jira] [Commented] (HBASE-7005) Upgrade Thrift lib to 0.9.0

2012-10-17 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478339#comment-13478339
 ] 

Todd Lipcon commented on HBASE-7005:


Makes sense. Let's do this in trunk only, since the pom dependency change can 
hurt people's compatibility in earlier versions?

 Upgrade Thrift lib to 0.9.0
 ---

 Key: HBASE-7005
 URL: https://issues.apache.org/jira/browse/HBASE-7005
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Reporter: Jake Farrell
Priority: Minor
 Attachments: Hbase-7005.patch






[jira] [Commented] (HBASE-6852) SchemaMetrics.updateOnCacheHit costs too much while full scanning a table with all of its fields

2012-10-08 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13471764#comment-13471764
 ] 

Todd Lipcon commented on HBASE-6852:


bq. @Todd: Re: ThreadLocal. We had a bunch of incidents a few years back at 
Salesforce where it turned out that accessing threadlocals is not free.

Agreed, it involves a lookup in a hashmap. But we could do that lookup once, 
and pass it through the whole scanner stack, etc, in some kind of ScanContext 
parameter. That would be helpful for a bunch of places where we currently use 
threadlocals (metrics, rpc call cancellation checks, tracing, etc)
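The "look it up once and pass it down" idea can be sketched like this (ScanContext is hypothetical here, named only for the example; nothing like it existed in HBase at the time):

```java
// Sketch of the ScanContext idea: resolve the per-thread state once at the
// top of the scan, then pass the object down the scanner stack instead of
// paying a ThreadLocal hashmap lookup on every cell.
public class ScanContextSketch {

    // Hypothetical bundle of per-scan state (metrics, cancellation checks,
    // tracing would all hang off something like this).
    static final class ScanContext {
        long cacheHits;
        void onCacheHit() { cacheHits++; }
    }

    static final ThreadLocal<ScanContext> CURRENT =
        ThreadLocal.withInitial(ScanContext::new);

    // Hot loop, explicit-context style: one ThreadLocal.get() per scan,
    // then plain field access per cell.
    static long scan(int cells) {
        ScanContext ctx = CURRENT.get();   // single lookup for the whole scan
        for (int i = 0; i < cells; i++) {
            ctx.onCacheHit();              // no per-cell map lookup
        }
        return ctx.cacheHits;
    }
}
```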

 SchemaMetrics.updateOnCacheHit costs too much while full scanning a table 
 with all of its fields
 

 Key: HBASE-6852
 URL: https://issues.apache.org/jira/browse/HBASE-6852
 Project: HBase
  Issue Type: Improvement
  Components: metrics
Affects Versions: 0.94.0
Reporter: Cheng Hao
Priority: Minor
  Labels: performance
 Fix For: 0.94.3, 0.96.0

 Attachments: AtomicTest.java, onhitcache-trunk.patch


 The SchemaMetrics.updateOnCacheHit costs too much while I am doing the full 
 table scanning.
 Here is the top 5 hotspots within regionserver while full scanning a table: 
 (Sorry for the less-well-format)
 CPU: Intel Westmere microarchitecture, speed 2.262e+06 MHz (estimated)
 Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit 
 mask of 0x00 (No unit mask) count 500
 samples  %        image name  symbol name
 ---
 98447    13.4324  14033.jo    void 
 org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics.updateOnCacheHit(org.apache.hadoop.hbase.io.hfile.BlockType$BlockCategory,
  boolean)
   98447  100.000  14033.jo    void 
 org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics.updateOnCacheHit(org.apache.hadoop.hbase.io.hfile.BlockType$BlockCategory,
  boolean) [self]
 ---
 45814    6.2510   14033.jo    int 
 org.apache.hadoop.hbase.KeyValue$KeyComparator.compareRows(byte[], int, int, 
 byte[], int, int)
   45814  100.000  14033.jo    int 
 org.apache.hadoop.hbase.KeyValue$KeyComparator.compareRows(byte[], int, int, 
 byte[], int, int) [self]
 ---
 43523    5.9384   14033.jo    boolean 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(org.apache.hadoop.hbase.KeyValue)
   43523  100.000  14033.jo    boolean 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(org.apache.hadoop.hbase.KeyValue)
  [self]
 ---
 42548    5.8054   14033.jo    int 
 org.apache.hadoop.hbase.KeyValue$KeyComparator.compare(byte[], int, int, 
 byte[], int, int)
   42548  100.000  14033.jo    int 
 org.apache.hadoop.hbase.KeyValue$KeyComparator.compare(byte[], int, int, 
 byte[], int, int) [self]
 ---
 40572    5.5358   14033.jo    int 
 org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.binarySearchNonRootIndex(byte[],
  int, int, java.nio.ByteBuffer, org.apache.hadoop.io.RawComparator)~1
   40572  100.000  14033.jo    int 
 org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.binarySearchNonRootIndex(byte[],
  int, int, java.nio.ByteBuffer, org.apache.hadoop.io.RawComparator)~1 [self]



[jira] [Updated] (HBASE-6923) Create scanner benchmark

2012-10-02 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HBASE-6923:
---

Attachment: TestStorePerformance.java

Hey Karthik. Here's a test I wrote recently which may help you out. I also have 
some interesting results I'm hoping to share later this week.

 Create scanner benchmark
 

 Key: HBASE-6923
 URL: https://issues.apache.org/jira/browse/HBASE-6923
 Project: HBase
  Issue Type: Sub-task
Reporter: Karthik Ranganathan
Assignee: Karthik Ranganathan
 Attachments: TestStorePerformance.java


 Create a simple program to benchmark performance/throughput of scanners, and 
 print some results at the end.



[jira] [Commented] (HBASE-6852) SchemaMetrics.updateOnCacheHit costs too much while full scanning a table with all of its fields

2012-09-21 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13460689#comment-13460689
 ] 

Todd Lipcon commented on HBASE-6852:


I have a full table scan in isolation benchmark I've been working on. My 
benchmark currently disables metrics, so I haven't seen this, but I'll add a 
flag to it to enable metrics and see if I can reproduce. Since it runs in 
isolation it's easy to run under perf stat and get cycle counts, etc, out of 
it. Will report back next week.

 SchemaMetrics.updateOnCacheHit costs too much while full scanning a table 
 with all of its fields
 

 Key: HBASE-6852
 URL: https://issues.apache.org/jira/browse/HBASE-6852
 Project: HBase
  Issue Type: Improvement
  Components: metrics
Affects Versions: 0.94.0
Reporter: Cheng Hao
Priority: Minor
  Labels: performance
 Fix For: 0.94.3, 0.96.0

 Attachments: onhitcache-trunk.patch


 The SchemaMetrics.updateOnCacheHit costs too much while I am doing the full 
 table scanning.



[jira] [Commented] (HBASE-6852) SchemaMetrics.updateOnCacheHit costs too much while full scanning a table with all of its fields

2012-09-21 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13460698#comment-13460698
 ] 

Todd Lipcon commented on HBASE-6852:


If using an array of longs, we'd get a ton of cache contention effects. 
Whatever we do should be cache-line padded to avoid this perf hole.

Having a per-thread (ThreadLocal) metrics array isn't a bad way to go: no 
contention, can use non-volatile types, and can be stale-read during metrics 
snapshots by just iterating over all the threads.
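The per-thread idea above can be sketched roughly as follows (hypothetical names, not HBase code; a production version would use a mechanism like @Contended, since the JVM may reorder or elide naive padding fields). The hot path is a plain, non-volatile increment on a padded counter owned by one thread; the snapshot path sums possibly stale values across all threads.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: one plain, cache-line-padded counter per thread.
// The hot path does a non-volatile increment with no CAS and no memory
// barrier; a metrics snapshot iterates all threads and sums values that
// may be slightly stale, which is acceptable for metrics.
class PerThreadHitCounter {
    // Roughly a 64-byte cache line of padding so two threads' counters
    // never land on the same line (the "perf hole" above).
    static final class PaddedCount {
        long value;                       // written only by the owning thread
        long p1, p2, p3, p4, p5, p6, p7;  // padding
    }

    private final Map<Thread, PaddedCount> all = new ConcurrentHashMap<>();
    private final ThreadLocal<PaddedCount> local = ThreadLocal.withInitial(() -> {
        PaddedCount c = new PaddedCount();
        all.put(Thread.currentThread(), c);
        return c;
    });

    void increment() {
        local.get().value++;              // hot path: plain field write
    }

    long sum() {
        long total = 0;
        for (PaddedCount c : all.values()) {
            total += c.value;             // racy, stale-tolerant read
        }
        return total;
    }
}
```

Single-threaded, the sum is exact; with concurrent writers it may lag by a few increments, which a metrics snapshot tolerates.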

 SchemaMetrics.updateOnCacheHit costs too much while full scanning a table 
 with all of its fields
 

 Key: HBASE-6852
 URL: https://issues.apache.org/jira/browse/HBASE-6852
 Project: HBase
  Issue Type: Improvement
  Components: metrics
Affects Versions: 0.94.0
Reporter: Cheng Hao
Priority: Minor
  Labels: performance
 Fix For: 0.94.3, 0.96.0

 Attachments: onhitcache-trunk.patch


 The SchemaMetrics.updateOnCacheHit costs too much while I am doing the full 
 table scanning.



[jira] [Commented] (HBASE-6852) SchemaMetrics.updateOnCacheHit costs too much while full scanning a table with all of its fields

2012-09-21 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13460736#comment-13460736
 ] 

Todd Lipcon commented on HBASE-6852:


bq. getAndIncrement into just one cpu instruction

True, but it's a pretty expensive instruction, since it has to steal that cache 
line from whichever other core used it previously, and I believe it acts as a full 
memory barrier as well (eg flushing write-combining buffers).


The Cliff Click counter is effective but uses more memory. Aggregating 
stats locally and pushing them to metrics seems ideal, but if we can't do that 
easily, then having the metrics per-thread and then occasionally grabbing them 
would work too. Memcached's metrics work like that.
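The "aggregate locally, grab occasionally" approach discussed here is essentially what the JDK later shipped as java.util.concurrent.atomic.LongAdder: increments land in striped, padded cells, so contending threads rarely share a cache line, and sum() folds the cells on demand. A minimal sketch:

```java
import java.util.concurrent.atomic.LongAdder;

// LongAdder stripes increments across padded cells under contention, so the
// hot path avoids the cross-core cache-line steal that a shared
// AtomicLong.getAndIncrement() pays. sum() folds the cells on demand and may
// read slightly stale totals while writers run, which metrics can tolerate.
public class StripedMetricDemo {
    /** Runs `threads` writers, each incrementing `perThread` times. */
    static long countWith(int threads, int perThread) {
        final LongAdder hits = new LongAdder();
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++) {
                    hits.increment();   // cheap even under heavy contention
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            try {
                w.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return hits.sum();              // exact once all writers have joined
    }

    public static void main(String[] args) {
        System.out.println(countWith(4, 100_000)); // prints 400000
    }
}
```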

 SchemaMetrics.updateOnCacheHit costs too much while full scanning a table 
 with all of its fields
 

 Key: HBASE-6852
 URL: https://issues.apache.org/jira/browse/HBASE-6852
 Project: HBase
  Issue Type: Improvement
  Components: metrics
Affects Versions: 0.94.0
Reporter: Cheng Hao
Priority: Minor
  Labels: performance
 Fix For: 0.94.3, 0.96.0

 Attachments: onhitcache-trunk.patch


 The SchemaMetrics.updateOnCacheHit costs too much while I am doing the full 
 table scanning.



[jira] [Commented] (HBASE-3834) Store ignores checksum errors when opening files

2012-09-17 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13457176#comment-13457176
 ] 

Todd Lipcon commented on HBASE-3834:


Hi Liang. Thanks so much for testing this out. We really appreciate it!

So, let's leave this open to be fixed for an 0.90.x release if we can. Maybe 
Jon or Ram might be interested in taking it up?

 Store ignores checksum errors when opening files
 

 Key: HBASE-3834
 URL: https://issues.apache.org/jira/browse/HBASE-3834
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.90.2
Reporter: Todd Lipcon
Assignee: liang xie
Priority: Critical
 Fix For: 0.90.8

 Attachments: hbase-3834.tar.gz2


 If you corrupt one of the storefiles in a region (eg using vim to muck up 
 some bytes), the region will still open, but that storefile will just be 
 ignored with a log message. We should probably not do this in general - 
 better to keep that region unassigned and force an admin to make a decision 
 to remove the bad storefile.



[jira] [Commented] (HBASE-6798) HDFS always read checksum form meta file

2012-09-17 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13457327#comment-13457327
 ] 

Todd Lipcon commented on HBASE-6798:


Fixing this for remote reads (ie not short circuit ones) is going to be 
somewhat tricky for Hadoop 1.0, because we need to keep protocol compatibility. 
Doing it for Hadoop 2 shouldn't be bad, because we have protobufs, but will 
still take a bit of careful HDFS surgery. I vaguely remember an existing HDFS 
JIRA about this, but now not sure where it went. Anyone remember the number or 
should we re-file?

 HDFS always read checksum form meta file
 

 Key: HBASE-6798
 URL: https://issues.apache.org/jira/browse/HBASE-6798
 Project: HBase
  Issue Type: Bug
  Components: performance
Affects Versions: 0.94.0, 0.94.1
Reporter: LiuLei
Priority: Blocker

 I use hbase 0.94.1 and hadoop-0.20.2-cdh3u5.
 HBase gained support for checksums in the HBase block cache in the HBASE-5074 jira.
 HBase supports checksums to decrease the IOPS of HDFS, so that HDFS
 doesn't need to read the checksum from the meta file of the block file.
 But in hadoop-0.20.2-cdh3u5, BlockSender still reads the metadata file even if the
 hbase.regionserver.checksum.verify property is true.



[jira] [Commented] (HBASE-3834) Store ignores checksum errors when opening files

2012-09-14 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13456000#comment-13456000
 ] 

Todd Lipcon commented on HBASE-3834:


Great to see this is fixed in 0.94.

Does someone have time to try this on the latest 0.90.x, which is still in 
production a lot of places?

 Store ignores checksum errors when opening files
 

 Key: HBASE-3834
 URL: https://issues.apache.org/jira/browse/HBASE-3834
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.90.2
Reporter: Todd Lipcon
Assignee: liang xie
Priority: Critical
 Fix For: 0.90.8

 Attachments: hbase-3834.tar.gz2


 If you corrupt one of the storefiles in a region (eg using vim to muck up 
 some bytes), the region will still open, but that storefile will just be 
 ignored with a log message. We should probably not do this in general - 
 better to keep that region unassigned and force an admin to make a decision 
 to remove the bad storefile.



[jira] [Commented] (HBASE-6783) Make read short circuit the default

2012-09-14 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13456003#comment-13456003
 ] 

Todd Lipcon commented on HBASE-6783:


To be clear, this is only for the tests, right? We can't make it the default in 
production because it requires server-side changes.

 Make read short circuit the default
 ---

 Key: HBASE-6783
 URL: https://issues.apache.org/jira/browse/HBASE-6783
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal
 Fix For: 0.96.0

 Attachments: HBASE-6783.v1.patch


 Per mailing-list discussion, read short circuit has little or no drawback, hence 
 should be used by default. As a consequence, we activate it for the default tests.
 It's possible to launch the test with -Ddfs.client.read.shortcircuit=false to 
 execute the tests without the shortcircuit, it will be used for some builds 
 on trunk.



[jira] [Commented] (HBASE-6783) Make read short circuit the default

2012-09-14 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13456008#comment-13456008
 ] 

Todd Lipcon commented on HBASE-6783:


+  String readOnConf = conf.get("dfs.client.read.shortcircuit");
+  return (readOnConf == null ? true : Boolean.parseBoolean(readOnConf));

can use conf.getBoolean()

The config/property name should also be clear that it's a setting for tests - 
eg hbase.tests.use.shortcircuit.reads
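To illustrate the getBoolean suggestion without depending on Hadoop, here is a stand-in with the same contract as Configuration.getBoolean(key, defaultValue) (the ConfLookup class name is made up): the default comes back when the key is absent, which replaces the explicit null check and Boolean.parseBoolean above.

```java
import java.util.Properties;

// Illustration only: ConfLookup is a made-up stand-in with the same contract
// as Hadoop's Configuration.getBoolean(key, defaultValue) -- the default is
// returned when the key is absent, else the stored value is parsed.
public class ConfLookup {
    private final Properties props = new Properties();

    public void set(String key, String value) {
        props.setProperty(key, value);
    }

    public boolean getBoolean(String key, boolean defaultValue) {
        String v = props.getProperty(key);
        return v == null ? defaultValue : Boolean.parseBoolean(v);
    }
}
```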


+  private void readShortCircuit(){
+    if (isReadShortCircuitOn()){
+      String curUser = System.getProperty("user.name");
+      LOG.info("read short circuit is ON for user " + curUser);

style: space before '{', space after '+'
rename to enableReadShortCircuit()



+    if (util.isReadShortCircuitOn()){
+      LOG.info("dfs.client.read.shortcircuit is on, " +
+          "testFullSystemBubblesFSErrors is not executed");
+      return;
+    }
Can use junit Assume here



- there's a spurious whitespace change

 Make read short circuit the default
 ---

 Key: HBASE-6783
 URL: https://issues.apache.org/jira/browse/HBASE-6783
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal
 Fix For: 0.96.0

 Attachments: HBASE-6783.v1.patch


 Per mailing-list discussion, read short circuit has little or no drawback, hence 
 should be used by default. As a consequence, we activate it for the default tests.
 It's possible to launch the test with -Ddfs.client.read.shortcircuit=false to 
 execute the tests without the shortcircuit, it will be used for some builds 
 on trunk.



[jira] [Commented] (HBASE-3834) Store ignores checksum errors when opening files

2012-09-05 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13448998#comment-13448998
 ] 

Todd Lipcon commented on HBASE-3834:


It's still a somewhat scary bug, if it still exists. It causes data to be 
silently missing from a table. So I hope someone will take interest in it :)

 Store ignores checksum errors when opening files
 

 Key: HBASE-3834
 URL: https://issues.apache.org/jira/browse/HBASE-3834
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.90.2
Reporter: Todd Lipcon
Priority: Critical
 Fix For: 0.90.8


 If you corrupt one of the storefiles in a region (eg using vim to muck up 
 some bytes), the region will still open, but that storefile will just be 
 ignored with a log message. We should probably not do this in general - 
 better to keep that region unassigned and force an admin to make a decision 
 to remove the bad storefile.



[jira] [Commented] (HBASE-6524) Hooks for hbase tracing

2012-08-23 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13440412#comment-13440412
 ] 

Todd Lipcon commented on HBASE-6524:


{quote}
I think the intent of putting htrace into maven repo is for wider adoption.
I wonder if the above namespace (involving cloudera which should not be an org) 
would serve that purpose well.
{quote}

Hey Ted. We use {{org.cloudera}} here to distinguish that this is a fully open 
source component, distinct from other software like Cloudera Manager which uses 
{{com.cloudera}}. The project is open on github and we fully anticipate that, 
if some community springs up around contributions, we'll accept pull requests 
and eventually move it to the Apache Incubator. At this point, though, it would 
be inappropriate to publish under {{org.apache}}.

Consider it the same as a project like Guava which is under Google's namespace 
but is still open source.

 Hooks for hbase tracing
 ---

 Key: HBASE-6524
 URL: https://issues.apache.org/jira/browse/HBASE-6524
 Project: HBase
  Issue Type: Sub-task
Reporter: Jonathan Leavitt
 Fix For: 0.96.0

 Attachments: 6524.addendum, createTableTrace.png, hbase-6524.diff


 Includes the hooks that use [htrace|http://www.github.com/cloudera/htrace] 
 library to add dapper-like tracing to hbase.





[jira] [Commented] (HBASE-6621) Reduce calls to Bytes.toInt

2012-08-21 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13438787#comment-13438787
 ] 

Todd Lipcon commented on HBASE-6621:


Oops, yea, I missed the fact that the caching was removed again in the later 
patches. The change makes sense to me.

 Reduce calls to Bytes.toInt
 ---

 Key: HBASE-6621
 URL: https://issues.apache.org/jira/browse/HBASE-6621
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.0, 0.94.2

 Attachments: 6621-0.96.txt, 6621-0.96-v2.txt, 6621-0.96-v3.txt, 
 6621-0.96-v4.txt


 Bytes.toInt shows up quite often in a profiler run.
 It turns out that one source is HFileReaderV2$ScannerV2.getKeyValue().
 Notice that we call the KeyValue(byte[], int) constructor, which forces the 
 constructor to determine its size by reading some of the header information 
 and calculate the size. In this case, however, we already know the size (from 
 the call to readKeyValueLen), so we could just use that.
 In the extreme case of 1's of columns this noticeably reduces CPU. 





[jira] [Commented] (HBASE-6621) Reduce calls to Bytes.toInt

2012-08-20 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13438430#comment-13438430
 ] 

Todd Lipcon commented on HBASE-6621:


Do you have benchmark results showing an improvement in actual scan speed?

When I looked into scan performance with oprofile a few months back, I found 
the same as you -- that a lot of time was spent in these calls. But when I also 
added cache miss counters to the profile, I found the reason was cache misses, 
not the actual CPU usage of the function. So caching them would just shift 
around the cache miss to the next access of the cache line elsewhere, without 
actually improving total performance.

 Reduce calls to Bytes.toInt
 ---

 Key: HBASE-6621
 URL: https://issues.apache.org/jira/browse/HBASE-6621
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.0, 0.94.2

 Attachments: 6621-0.96.txt, 6621-0.96-v2.txt, 6621-0.96-v3.txt, 
 6621-0.96-v4.txt


 Bytes.toInt shows up quite often in a profiler run.
 It turns out that one source is HFileReaderV2$ScannerV2.getKeyValue().
 Notice that we call the KeyValue(byte[], int) constructor, which forces the 
 constructor to determine its size by reading some of the header information 
 and calculate the size. In this case, however, we already know the size (from 
 the call to readKeyValueLen), so we could just use that.
 In the extreme case of 1's of columns this noticeably reduces CPU. 





[jira] [Commented] (HBASE-6586) Quarantine Corrupted HFiles

2012-08-15 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13435244#comment-13435244
 ] 

Todd Lipcon commented on HBASE-6586:


Can you loop that test until it fails, perhaps? I think getting full logs from 
a run is necessary to determine if it's an HDFS or HBase bug.

 Quarantine Corrupted HFiles
 ---

 Key: HBASE-6586
 URL: https://issues.apache.org/jira/browse/HBASE-6586
 Project: HBase
  Issue Type: Bug
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh

 We've encountered a few upgrades from 0.90 hbases + 20.2/1.x hdfs to 0.92 
 hbases + hdfs 2.x that get stuck.  I haven't been able to duplicate the 
 problem in my dev environment but we suspect this may be related to 
 HDFS-3731.  On the HBase side, it seems reasonable to quarantine what are 
 most likely truncated hfiles, so that they could later be recovered.
 Here's an example of the exception we've encountered:
 {code}
 2012-07-18 05:55:01,152 ERROR handler.OpenRegionHandler 
 (OpenRegionHandler.java:openRegion(346)) - Failed open of 
 region=user_mappings,080112102AA76EF98197605D341B9E6C5824D2BC|1001,1317824890618.eaed0e7abc6d27d28ff0e5a9b49c4c
 0d. 
 java.io.IOException: java.lang.IllegalArgumentException: Invalid HFile 
 version: 842220600 (expected to be between 1 and 2) 
 at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:306)
  
 at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:371) 
 at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:387) 
 at 
 org.apache.hadoop.hbase.regionserver.StoreFile$Reader.init(StoreFile.java:1026)
  
 at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:485) 
 at 
 org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:566)
  
 at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:286) 
 at org.apache.hadoop.hbase.regionserver.Store.init(Store.java:223) 
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:2534)
  
 at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:454) 
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3282) 
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3230) 
 at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:331)
 at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:107)
 at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169) 
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
  
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
  
 at java.lang.Thread.run(Thread.java:619) 
 Caused by: java.lang.IllegalArgumentException: Invalid HFile version: 
 842220600 (expected to be between 1 and 2) 
 at org.apache.hadoop.hbase.io.hfile.HFile.checkFormatVersion(HFile.java:515) 
 at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:303)
  
 ... 17 more
 {code}
 Specifically -- the FixedFileTrailer is incorrect, and seemingly missing.





[jira] [Commented] (HBASE-6579) Unnecessary KV order check in StoreScanner

2012-08-14 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433938#comment-13433938
 ] 

Todd Lipcon commented on HBASE-6579:


Maybe change to an assert so that it still runs in the context of our test 
cases, but not in real clusters?
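The assert idea can be sketched as follows (simplified types; String stands in for KeyValue and the class name is hypothetical): the ordering check fires in test JVMs started with -ea, but is skipped at runtime on production region servers.

```java
import java.util.Comparator;

// Sketch only: the increasing-order check from StoreScanner rewritten as a
// Java assert. With -ea (typical for test runs) an out-of-order KV still
// fails loudly; without it, production scans skip the per-KV comparison.
class OrderCheckedScan {
    static void scan(String[] kvs, Comparator<String> comparator) {
        String prevKV = null;
        for (String kv : kvs) {
            // Evaluated only when assertions are enabled on this JVM.
            assert prevKV == null || comparator.compare(prevKV, kv) <= 0
                : "Key " + prevKV + " followed by a smaller key " + kv;
            prevKV = kv;
        }
    }
}
```

An in-order scan completes silently either way; only an -ea JVM pays the comparison cost.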

 Unnecessary KV order check in StoreScanner
 --

 Key: HBASE-6579
 URL: https://issues.apache.org/jira/browse/HBASE-6579
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.0, 0.94.2


 In StoreScanner.next(List<KeyValue>, int, String) I find this code:
 {code}
   // Check that the heap gives us KVs in an increasing order.
   if (prevKV != null && comparator != null
       && comparator.compare(prevKV, kv) > 0) {
     throw new IOException("Key " + prevKV + " followed by a " +
         "smaller key " + kv + " in cf " + store);
   }
   prevKV = kv;
 {code}
 So this checks for bugs in the HFiles or the scanner code. It needs to 
 compare each KV with its predecessor. This seems unnecessary now; I propose 
 that we remove this.





[jira] [Commented] (HBASE-6586) Quarantine Corrupted HFiles

2012-08-14 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13434683#comment-13434683
 ] 

Todd Lipcon commented on HBASE-6586:


Automatically quarantining the files seems like a dangerous default. 
Quarantining the region in some way such that all calls to it would fail until 
the administrator fixes it seems like a better approach. My reasoning is that 
quarantining an HFile is silent data loss (or inconsistency). Data may 
reappear or revert to an old version. We can't accept that without a user 
confirming it.

I don't think this is related to HDFS-3731 -- the truncation we saw was not on 
a block boundary, and that bug would only cause the disappearance of an entire 
block.

 Quarantine Corrupted HFiles
 ---

 Key: HBASE-6586
 URL: https://issues.apache.org/jira/browse/HBASE-6586
 Project: HBase
  Issue Type: Bug
Reporter: Jonathan Hsieh

 We've encountered a few upgrades from 0.90 hbases + 20.2/1.x hdfs to 0.92 
 hbases + hdfs 2.x that get stuck.  I haven't been able to duplicate the 
 problem in my dev environment but we suspect this may be related to 
 HDFS-3731.  On the HBase side, it seems reasonable to quarantine what are 
 most likely truncated hfiles, so that they could later be recovered.
 Here's an example of the exception we've encountered:
 {code}
 2012-07-18 05:55:01,152 ERROR handler.OpenRegionHandler 
 (OpenRegionHandler.java:openRegion(346)) - Failed open of 
 region=user_mappings,080112102AA76EF98197605D341B9E6C5824D2BC|1001,1317824890618.eaed0e7abc6d27d28ff0e5a9b49c4c
 0d. 
 java.io.IOException: java.lang.IllegalArgumentException: Invalid HFile 
 version: 842220600 (expected to be between 1 and 2) 
 at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:306)
  
 at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:371) 
 at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:387) 
 at 
 org.apache.hadoop.hbase.regionserver.StoreFile$Reader.init(StoreFile.java:1026)
  
 at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:485) 
 at 
 org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:566)
  
 at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:286) 
 at org.apache.hadoop.hbase.regionserver.Store.init(Store.java:223) 
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:2534)
  
 at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:454) 
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3282) 
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3230) 
 at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:331)
 at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:107)
 at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169) 
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
  
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
  
 at java.lang.Thread.run(Thread.java:619) 
 Caused by: java.lang.IllegalArgumentException: Invalid HFile version: 
 842220600 (expected to be between 1 and 2) 
 at org.apache.hadoop.hbase.io.hfile.HFile.checkFormatVersion(HFile.java:515) 
 at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:303)
  
 ... 17 more
 {code}
 Specifically -- the FixedFileTrailer is incorrect, and seemingly missing.





[jira] [Commented] (HBASE-6575) Add SPM for HBase to Ref Guide

2012-08-13 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433714#comment-13433714
 ] 

Todd Lipcon commented on HBASE-6575:


If we point to one commercial monitoring product, we should probably point to 
all, which is kind of a slippery slope. What do folks think?

 Add SPM for HBase to Ref Guide
 --

 Key: HBASE-6575
 URL: https://issues.apache.org/jira/browse/HBASE-6575
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Reporter: Otis Gospodnetic
Priority: Minor
 Attachments: HBASE-6575.patch


 Ref Guide should point users to SPM for HBase in monitoring section(s).





[jira] [Commented] (HBASE-6358) Bulkloading from remote filesystem is problematic

2012-08-06 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13429359#comment-13429359
 ] 

Todd Lipcon commented on HBASE-6358:


bq. @Todd, would you be in favor of adding another JIRA ticket for a 
distributed bulk loader, and having this ticket be blocked until it's done? I 
think it should be blocked so we don't remove the current bulkload from remote 
fs capability without offering an alternative, though the user does have the 
option of running distcp themselves.

I could go either way on this. Up to folks who are more actively contributing 
code than I :)

 Bulkloading from remote filesystem is problematic
 -

 Key: HBASE-6358
 URL: https://issues.apache.org/jira/browse/HBASE-6358
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.0
Reporter: Dave Revell
Assignee: Dave Revell
 Attachments: 6358-suggestion.txt, HBASE-6358-trunk-v1.diff, 
 HBASE-6358-trunk-v2.diff, HBASE-6358-trunk-v3.diff


 Bulk loading hfiles that don't live on the same filesystem as HBase can cause 
 problems for subtle reasons.
 In Store.bulkLoadHFile(), the regionserver will copy the source hfile to its 
 own filesystem if it's not already there. Since this can take a long time for 
 large hfiles, it's likely that the client will time out and retry. When the 
 client retries repeatedly, there may be several bulkload operations in flight 
 for the same hfile, causing lots of unnecessary IO and tying up handler 
 threads. This can seriously impact performance. In my case, the cluster 
 became unusable and the regionservers had to be kill -9'ed.
 Possible solutions:
  # Require that hfiles already be on the same filesystem as HBase in order 
 for bulkloading to succeed. The copy could be handled by 
 LoadIncrementalHFiles before the regionserver is called.
  # Others? I'm not familiar with Hadoop IPC so there may be tricks to extend 
 the timeout or something else.
 I'm willing to write a patch but I'd appreciate recommendations on how to 
 proceed.





[jira] [Commented] (HBASE-6515) Setting request size with protobuf

2012-08-06 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13429360#comment-13429360
 ] 

Todd Lipcon commented on HBASE-6515:


Did you look into what the default max size is?

I don't think we should arbitrarily raise the limit. Instead, if replication 
sends too-large RPCs, we should figure out how to make it do smaller batches to 
fit within the limit. RPC payloads in the 10s or 100s of MBs are not good.
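
To make the batching idea concrete, here is a rough sketch of splitting payloads into batches that fit a byte budget; the byte[] payloads and the 100-byte budget are stand-ins for illustration, not HBase's actual replication API:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchBySize {

    /** Splits payloads into batches whose total size stays within maxBatchBytes. */
    static List<List<byte[]>> batch(List<byte[]> edits, long maxBatchBytes) {
        List<List<byte[]>> batches = new ArrayList<>();
        List<byte[]> current = new ArrayList<>();
        long currentBytes = 0;
        for (byte[] edit : edits) {
            if (!current.isEmpty() && currentBytes + edit.length > maxBatchBytes) {
                batches.add(current);
                current = new ArrayList<>();
                currentBytes = 0;
            }
            // An edit larger than the budget still ships alone in its own batch.
            current.add(edit);
            currentBytes += edit.length;
        }
        if (!current.isEmpty()) {
            batches.add(current);
        }
        return batches;
    }

    public static void main(String[] args) {
        List<byte[]> edits = new ArrayList<>();
        for (int i = 0; i < 5; i++) edits.add(new byte[40]); // five 40-byte edits
        // With a 100-byte budget: [40,40], [40,40], [40]
        System.out.println(batch(edits, 100).size()); // prints 3
    }
}
```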

 Setting request size with protobuf
 --

 Key: HBASE-6515
 URL: https://issues.apache.org/jira/browse/HBASE-6515
 Project: HBase
  Issue Type: Bug
  Components: ipc, replication
Affects Versions: 0.96.0
Reporter: Himanshu Vashishtha
Priority: Critical

 While running replication on upstream code, I am hitting  the size-limit 
 exception while sending WALEdits to a different cluster.
 {code}
 com.google.protobuf.InvalidProtocolBufferException: IPC server unable to read 
 call parameters: Protocol message was too large.  May be malicious.  Use 
 CodedInputStream.setSizeLimit() to increase the size limit.
 {code}
 Do we have a property to set some max size or something?





[jira] [Commented] (HBASE-6358) Bulkloading from remote filesystem is problematic

2012-08-03 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428291#comment-13428291
 ] 

Todd Lipcon commented on HBASE-6358:


hmm... I don't know if I thought about it in a huge amount of detail. The 
original idea was to allow you to run an MR on one cluster, and then 
LoadIncrementalHFiles on your HBase cluster which uses a different HDFS. I 
was thinking there would be an advantage here over the distcp-then-load 
approach, because the region server doing the copy would end up with a local 
replica after the load.

That said, I didn't think through the timeout implications, which seems to be 
the issue discussed in this JIRA.

As for how to determine if they're the same, the .equals() call is supposed to 
do that, but perhaps it's not working right?
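
For reference, "the same filesystem" usually reduces to the scheme and authority of the filesystem URI matching. A hedged sketch of that comparison using plain java.net.URI (Hadoop's actual FileSystem .equals() semantics may differ):

```java
import java.net.URI;

public class FsIdentity {

    /** True if two filesystem URIs refer to the same filesystem (same scheme + authority). */
    static boolean sameFileSystem(URI a, URI b) {
        return equalsIgnoreCase(a.getScheme(), b.getScheme())
            && equalsIgnoreCase(a.getAuthority(), b.getAuthority());
    }

    private static boolean equalsIgnoreCase(String x, String y) {
        return x == null ? y == null : x.equalsIgnoreCase(y);
    }

    public static void main(String[] args) {
        URI rsFs   = URI.create("hdfs://nn1:8020/hbase");
        URI local  = URI.create("hdfs://nn1:8020/user/dave/hfiles");
        URI remote = URI.create("hdfs://other-nn:8020/user/dave/hfiles");
        System.out.println(sameFileSystem(rsFs, local));  // true: paths differ, filesystem doesn't
        System.out.println(sameFileSystem(rsFs, remote)); // false: different namenode
    }
}
```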

 Bulkloading from remote filesystem is problematic
 -

 Key: HBASE-6358
 URL: https://issues.apache.org/jira/browse/HBASE-6358
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.0
Reporter: Dave Revell
Assignee: Dave Revell
 Attachments: 6358-suggestion.txt, HBASE-6358-trunk-v1.diff, 
 HBASE-6358-trunk-v2.diff, HBASE-6358-trunk-v3.diff


 Bulk loading hfiles that don't live on the same filesystem as HBase can cause 
 problems for subtle reasons.
 In Store.bulkLoadHFile(), the regionserver will copy the source hfile to its 
 own filesystem if it's not already there. Since this can take a long time for 
 large hfiles, it's likely that the client will time out and retry. When the 
 client retries repeatedly, there may be several bulkload operations in flight 
 for the same hfile, causing lots of unnecessary IO and tying up handler 
 threads. This can seriously impact performance. In my case, the cluster 
 became unusable and the regionservers had to be kill -9'ed.
 Possible solutions:
  # Require that hfiles already be on the same filesystem as HBase in order 
 for bulkloading to succeed. The copy could be handled by 
 LoadIncrementalHFiles before the regionserver is called.
  # Others? I'm not familiar with Hadoop IPC so there may be tricks to extend 
 the timeout or something else.
 I'm willing to write a patch but I'd appreciate recommendations on how to 
 proceed.





[jira] [Commented] (HBASE-6358) Bulkloading from remote filesystem is problematic

2012-08-03 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428337#comment-13428337
 ] 

Todd Lipcon commented on HBASE-6358:


The problem with doing it automatically in LoadIncrementalHFiles (i.e. the client) 
is that funneling all the data through this single node is going to be very slow 
for any non-trivial amount of data.

Here's an alternate idea:
1. In this JIRA, change the RS side to fail if the filesystem doesn't match
2. Separately, add a new DistributedLoadIncrementalHFiles program which acts 
as a combination of distcp and LoadIncrementalHFiles. For each RS (or perhaps 
for each region), it would create one map task, with a locality hint to that 
server. Then the task would copy the relevant file (achieving a local replica) 
and make the necessary call to load the file.

Between step 1 and 2, users would have to use distcp and sacrifice locality. 
But, with the current scheme, they already don't get locality for the common 
case where the MR job runs on the same cluster as HBase.

Thoughts?
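
The planning phase of step 2 -- one copy-and-load task per region, hinted to its hosting server -- could be sketched roughly as follows. The region and server names are made up, and a real version would emit MR input splits with location hints rather than a plain map:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DistributedLoadPlan {

    /** One copy-and-load task per region, placed on the region's hosting server for locality. */
    static Map<String, String> planTasks(Map<String, String> regionToServer) {
        Map<String, String> taskToLocalityHint = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : regionToServer.entrySet()) {
            // Task id derives from the region; the locality hint is its regionserver,
            // so the copy lands a local replica before the load call.
            taskToLocalityHint.put("copy-load-" + e.getKey(), e.getValue());
        }
        return taskToLocalityHint;
    }

    public static void main(String[] args) {
        Map<String, String> regions = new LinkedHashMap<>();
        regions.put("region-a", "rs1.example.com");
        regions.put("region-b", "rs2.example.com");
        System.out.println(planTasks(regions));
        // prints {copy-load-region-a=rs1.example.com, copy-load-region-b=rs2.example.com}
    }
}
```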

 Bulkloading from remote filesystem is problematic
 -

 Key: HBASE-6358
 URL: https://issues.apache.org/jira/browse/HBASE-6358
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.0
Reporter: Dave Revell
Assignee: Dave Revell
 Attachments: 6358-suggestion.txt, HBASE-6358-trunk-v1.diff, 
 HBASE-6358-trunk-v2.diff, HBASE-6358-trunk-v3.diff


 Bulk loading hfiles that don't live on the same filesystem as HBase can cause 
 problems for subtle reasons.
 In Store.bulkLoadHFile(), the regionserver will copy the source hfile to its 
 own filesystem if it's not already there. Since this can take a long time for 
 large hfiles, it's likely that the client will time out and retry. When the 
 client retries repeatedly, there may be several bulkload operations in flight 
 for the same hfile, causing lots of unnecessary IO and tying up handler 
 threads. This can seriously impact performance. In my case, the cluster 
 became unusable and the regionservers had to be kill -9'ed.
 Possible solutions:
  # Require that hfiles already be on the same filesystem as HBase in order 
 for bulkloading to succeed. The copy could be handled by 
 LoadIncrementalHFiles before the regionserver is called.
  # Others? I'm not familiar with Hadoop IPC so there may be tricks to extend 
 the timeout or something else.
 I'm willing to write a patch but I'd appreciate recommendations on how to 
 proceed.





[jira] [Commented] (HBASE-6358) Bulkloading from remote filesystem is problematic

2012-08-03 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428508#comment-13428508
 ] 

Todd Lipcon commented on HBASE-6358:


bq. not breaking the current use case of non-local bulk loading when size or 
speed requirements are modest

If the size and speed don't matter, then wouldn't you have just used a normal 
(non-bulk-load) MR job to load the data?

I think funneling the load through one host basically defeats the purpose of 
bulk load. Perhaps it could be available as an option for people just testing 
out, but I would prefer the default to be a failure, and you have to enable the 
copy with a {{-copyToCluster}} flag or something.

 Bulkloading from remote filesystem is problematic
 -

 Key: HBASE-6358
 URL: https://issues.apache.org/jira/browse/HBASE-6358
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.0
Reporter: Dave Revell
Assignee: Dave Revell
 Attachments: 6358-suggestion.txt, HBASE-6358-trunk-v1.diff, 
 HBASE-6358-trunk-v2.diff, HBASE-6358-trunk-v3.diff


 Bulk loading hfiles that don't live on the same filesystem as HBase can cause 
 problems for subtle reasons.
 In Store.bulkLoadHFile(), the regionserver will copy the source hfile to its 
 own filesystem if it's not already there. Since this can take a long time for 
 large hfiles, it's likely that the client will time out and retry. When the 
 client retries repeatedly, there may be several bulkload operations in flight 
 for the same hfile, causing lots of unnecessary IO and tying up handler 
 threads. This can seriously impact performance. In my case, the cluster 
 became unusable and the regionservers had to be kill -9'ed.
 Possible solutions:
  # Require that hfiles already be on the same filesystem as HBase in order 
 for bulkloading to succeed. The copy could be handled by 
 LoadIncrementalHFiles before the regionserver is called.
  # Others? I'm not familiar with Hadoop IPC so there may be tricks to extend 
 the timeout or something else.
 I'm willing to write a patch but I'd appreciate recommendations on how to 
 proceed.





[jira] [Commented] (HBASE-1015) pure C and C++ client libraries

2012-07-28 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13424488#comment-13424488
 ] 

Todd Lipcon commented on HBASE-1015:


Hey Andrew. Someone here at Cloudera is working on SASL support for the Thrift 
C++ bindings, I believe -- at least the client side -- which should be 
compatible with the Java server. Hopefully we'll post it to THRIFT-1620 in the 
coming weeks.

 pure C and C++ client libraries
 ---

 Key: HBASE-1015
 URL: https://issues.apache.org/jira/browse/HBASE-1015
 Project: HBase
  Issue Type: New Feature
  Components: client
Affects Versions: 0.20.6
Reporter: Andrew Purtell
Priority: Minor

 If via HBASE-794 first class support for talking via Thrift directly to 
 HMaster and HRS is available, then pure C and C++ client libraries are 
 possible. 
 The C client library would wrap a Thrift core. 
 The C++ client library can provide a class hierarchy quite close to 
 o.a.h.h.client and, ideally, identical semantics. It should be just a 
 wrapper around the C API, for economy.
 Internally to my employer there is a lot of resistance to HBase because many 
 dev teams have a strong C/C++ bias. The real issue however is really client 
 side integration, not a fundamental objection. (What runs server side and how 
 it is managed is a secondary consideration.)




