My keys are composed of four 8-byte IDs. I am currently doing the load with MR,
but I get a timeout during the LoadIncrementalHFiles call:

12/06/24 21:29:01 ERROR mapreduce.LoadIncrementalHFiles: Encountered
unrecoverable error from region server
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
attempts=10, exceptions:
Sun Jun 24 21:29:01 CEST 2012,
org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$3@4699ecf9,
java.net.SocketTimeoutException: Call to das3002.cm.cluster/10.141.0.79:60020
failed on socket timeout exception: java.net.SocketTimeoutException: 60000
millis timeout while waiting for channel to be ready for read. ch :
java.nio.channels.SocketChannel[connected local=/10.141.0.254:43240
remote=das3002.cm.cluster/10.141.0.79:60020]

        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithRetries(HConnectionManager.java:1345)
        at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.tryAtomicRegionLoad(LoadIncrementalHFiles.java:487)
        at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$1.call(LoadIncrementalHFiles.java:275)
        at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$1.call(LoadIncrementalHFiles.java:273)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
12/06/24 21:30:52 ERROR mapreduce.LoadIncrementalHFiles: Encountered
unrecoverable error from region server

Is there a way to increase this timeout?
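
From looking at the trace, the 60000 ms value appears to be the client-side
RPC timeout. If I understand the configuration correctly, something like the
following in the client's hbase-site.xml should raise it (the property name
hbase.rpc.timeout and the value are my assumption from the defaults; I have
not verified this yet):

```xml
<!-- hbase-site.xml on the client running LoadIncrementalHFiles -->
<!-- Assumed: hbase.rpc.timeout controls the 60000 ms client RPC timeout
     seen in the SocketTimeoutException above -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>600000</value> <!-- 10 minutes instead of the default 60 s -->
</property>
```

Is this the right knob, or is there a separate setting for the bulk load path?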

Thank you,

On Tue, Jun 26, 2012 at 7:05 PM, Andrew Purtell <[email protected]> wrote:

> On Tue, Jun 26, 2012 at 9:56 AM, Sever Fundatureanu
> <[email protected]> wrote:
> > I have to bulkload 6 tables which contain the same information but with a
> > different order to cover all possible access patterns. Would it be a good
> > idea to do only one load and use co-processors to populate the other
> > tables, instead of doing the traditional MR bulkload which would require
> 6
> > separate jobs?
>
> Without knowing more than you've said, it seems better to use MR to
> build all input.
>
> Best regards,
>
>    - Andy
>
> Problems worthy of attack prove their worth by hitting back. - Piet
> Hein (via Tom White)
>



-- 
Sever Fundatureanu

Vrije Universiteit Amsterdam
E-mail: [email protected]
