You can increase the RPC timeout period to keep this from happening (a rough client-side sketch follows below the quoted message). But maybe it makes sense to give our RPC a "keepalive" option for calls that may run for a long time (like execCoprocessor)?
Or fold this into a broader rework toward a proper async RPC model.

Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein (via Tom White)

----- Original Message -----
> From: Stack <[email protected]>
> To: [email protected]
> Cc:
> Sent: Thursday, March 8, 2012 9:14 AM
> Subject: Re: Coprocessor execution with bulk data
>
> On Wed, Mar 7, 2012 at 10:59 PM, raghavendhra rahul
> <[email protected]> wrote:
>> 2012-03-08 12:03:09,475 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server
>> Responder, call execCoprocessor([B@50cb21, getProjection(), rpc version=1,
>> client version=0, methodsFingerPrint=0), rpc version=1, client version=29,
>> methodsFingerPrint=54742778 from 10.184.17.26:46472: output error
>> 2012-03-08 12:03:09,476 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server
>> handler 7 on 60020 caught: java.nio.channels.ClosedChannelException
>
> Usually this means the client has gone away, perhaps because the
> processing was taking longer than the rpctimeout? Does the above
> happen every time?
>
> St.Ack
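
For reference, a minimal sketch of the timeout workaround discussed above, assuming the client-side property is hbase.rpc.timeout; the table name and the ten-minute value are placeholders only:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;

    public class LongCoprocessorCallExample {
        public static void main(String[] args) throws Exception {
            // Start from the cluster's hbase-site.xml defaults.
            Configuration conf = HBaseConfiguration.create();

            // Raise the client RPC timeout so a long-running execCoprocessor
            // call is not abandoned before the region server responds.
            // 600000 ms (10 minutes) is only an illustrative value.
            conf.setLong("hbase.rpc.timeout", 600000L);

            HTable table = new HTable(conf, "mytable");
            try {
                // ... issue the coprocessor call or other long-running RPCs here ...
            } finally {
                table.close();
            }
        }
    }

The same value can also be set cluster-wide in hbase-site.xml. Raising it only papers over the underlying issue, which is why the thread suggests a keepalive option or a proper async RPC model instead.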
