wForget commented on issue #1796:
URL: https://github.com/apache/incubator-kyuubi/issues/1796#issuecomment-1016240678


   > It seems that our main method has been invoked; maybe we can build an extra RPC layer for the engine to report its startup status to the server that launches it
   
   It may also fail before the main method is invoked, for example during YARN container localization:
   ```
   2022-01-17 06:26:59,603 WARN [ContainerLocalizer Downloader] org.apache.hadoop.hdfs.client.impl.BlockReaderFactory: I/O error constructing remote block reader.
   java.io.IOException: DestHost:destPort XX.XX.XX:8888 , LocalHost:localPort XX.XX.XX/XX.XX.XX:0. Failed on local exception: java.io.IOException: java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/XX.XX.XX:45616 remote=XX.XX.XX/XX.XX.XX:8888]. Total timeout mills is 120000, 117866 millis timeout left.
           at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
           at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
           at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
           at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
           at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:856)
           at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:831)
           at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1566)
           at org.apache.hadoop.ipc.Client.call(Client.java:1508)
           at org.apache.hadoop.ipc.Client.call(Client.java:1405)
           at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
           at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
           at com.sun.proxy.$Proxy10.getServerDefaults(Unknown Source)
           at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getServerDefaults(ClientNamenodeProtocolTranslatorPB.java:341)
           at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
           at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
           at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
           at java.lang.reflect.Method.invoke(Method.java:498)
           at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
           at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
           at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
           at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
           at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
           at com.sun.proxy.$Proxy11.getServerDefaults(Unknown Source)
           at org.apache.hadoop.hdfs.DFSClient.getServerDefaults(DFSClient.java:673)
           at org.apache.hadoop.hdfs.DFSClient.shouldEncryptData(DFSClient.java:1768)
           at org.apache.hadoop.hdfs.DFSClient.newDataEncryptionKey(DFSClient.java:1774)
           at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:244)
           at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:227)
           at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.peerSend(SaslDataTransferClient.java:170)
           at org.apache.hadoop.hdfs.DFSUtilClient.peerFromSocketAndKey(DFSUtilClient.java:731)
           at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2976)
           at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:823)
           at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:748)
           at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:381)
           at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:735)
           at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:666)
           at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:851)
           at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:923)
           at java.io.DataInputStream.read(DataInputStream.java:149)
           at java.io.FilterInputStream.read(FilterInputStream.java:133)
           at java.io.PushbackInputStream.read(PushbackInputStream.java:186)
           at java.util.zip.ZipInputStream.readFully(ZipInputStream.java:403)
           at java.util.zip.ZipInputStream.readLOC(ZipInputStream.java:278)
           at java.util.zip.ZipInputStream.getNextEntry(ZipInputStream.java:122)
           at org.apache.hadoop.fs.FileUtil.unZip(FileUtil.java:632)
           at org.apache.hadoop.yarn.util.FSDownload.unpack(FSDownload.java:339)
           at org.apache.hadoop.yarn.util.FSDownload.downloadAndUnpack(FSDownload.java:307)
           at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:287)
           at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:68)
           at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:418)
           at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:415)
           at java.security.AccessController.doPrivileged(Native Method)
           at javax.security.auth.Subject.doAs(Subject.java:422)
           at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
           at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:415)
           at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:246)
           at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:239)
           at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:227)
           at java.util.concurrent.FutureTask.run(FutureTask.java:266)
           at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
           at java.util.concurrent.FutureTask.run(FutureTask.java:266)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:748)
   Caused by: java.io.IOException: java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/XX.XX.XX:45616 remote=XX.XX.XX/XX.XX.XX:8888]. Total timeout mills is 120000, 117866 millis timeout left.
           at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:778)
           at java.security.AccessController.doPrivileged(Native Method)
           at javax.security.auth.Subject.doAs(Subject.java:422)
           at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
           at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:732)
           at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:835)
           at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:413)
           at org.apache.hadoop.ipc.Client.getConnection(Client.java:1636)
           at org.apache.hadoop.ipc.Client.call(Client.java:1452)
           ... 56 more
   Caused by: java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/XX.XX.XX:45616 remote=XX.XX.XX/XX.XX.XX:8888]. Total timeout mills is 120000, 117866 millis timeout left.
           at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
           at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
           at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
           at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
           at java.io.FilterInputStream.read(FilterInputStream.java:133)
           at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
           at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
           at java.io.DataInputStream.readInt(DataInputStream.java:387)
           at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1880)
           at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:365)
           at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:622)
           at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:413)
           at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:822)
           at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:818)
           at java.security.AccessController.doPrivileged(Native Method)
           at javax.security.auth.Subject.doAs(Subject.java:422)
           at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
           at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:818)
           ... 59 more
   ```
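   Since the container can die during localization, before any engine code runs, an engine-side RPC report alone cannot cover this case; the launching server also needs a timeout plus an external status check (e.g. querying the resource manager). A minimal sketch of that fallback logic, with hypothetical names (`wait_for_engine`, `poll_app_state` are illustrative, not Kyuubi APIs; `poll_app_state` stands in for something like `YarnClient.getApplicationReport`):

   ```python
   import time

   def wait_for_engine(engine_registered, poll_app_state,
                       timeout_s=120.0, interval_s=1.0):
       """Wait for the engine to report in; fall back to the cluster
       manager's view of the application if it never does.

       engine_registered: () -> bool, True once the engine's own RPC
           report has arrived (impossible if main() never ran).
       poll_app_state: () -> (state, diagnostics), an external status
           source such as the YARN application report.
       """
       deadline = time.monotonic() + timeout_s
       while time.monotonic() < deadline:
           if engine_registered():
               # Engine started far enough to report itself.
               return ("STARTED", None)
           state, diagnostics = poll_app_state()
           if state in ("FAILED", "KILLED"):
               # Failure surfaced without any engine RPC, e.g. an I/O
               # error during container localization.
               return (state, diagnostics)
           time.sleep(interval_s)
       return ("TIMEOUT", None)

   # Engine died before main(): no registration ever arrives, but the
   # external poll still surfaces the diagnostics.
   result = wait_for_engine(
       engine_registered=lambda: False,
       poll_app_state=lambda: ("FAILED", "I/O error constructing remote block reader"),
       timeout_s=2.0, interval_s=0.01)
   # -> ('FAILED', 'I/O error constructing remote block reader')
   ```

   The point of the sketch is only the shape of the loop: the self-report path and the external poll have to run side by side, because either one alone misses a class of failures.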


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
