No, there was no internal domain-name resolution issue. As I mentioned, I saw
this problem only on a few nodes in the cluster.
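
For anyone hitting the same symptom, a minimal sketch of the kind of resolution
check involved (run from a JVM on the affected node; the service FQDN and port
7078 are the ones from the executor log quoted below, and the object name is
just a placeholder) could look like this:

```scala
import java.net.{InetAddress, InetSocketAddress, Socket, UnknownHostException}

// Placeholder sketch: resolve the driver's headless-service name the same way
// the executor does (plain JDK lookup), then attempt a TCP connect to the
// driver RPC port to separate a DNS failure from a routing/iptables failure.
object DriverSvcCheck {
  def main(args: Array[String]): Unit = {
    val host = args.headOption
      .getOrElse("spark-1586333186571-driver-svc.fractal-segmentation.svc")
    val port = 7078 // driver RPC port from the executor log
    try {
      val addr = InetAddress.getByName(host) // DNS step
      println(s"$host resolves to ${addr.getHostAddress}")
      val socket = new Socket()
      try socket.connect(new InetSocketAddress(addr, port), 5000) // routing step
      finally socket.close()
      println(s"TCP connect to $host:$port succeeded")
    } catch {
      case e: UnknownHostException =>
        println(s"DNS failure, same as in the executor log: $e")
      case e: java.io.IOException =>
        println(s"Name resolved but the connect failed (routing problem?): $e")
    }
  }
}
```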

On Thu, Apr 9, 2020 at 10:49 PM Wei Zhang <wezh...@outlook.com> wrote:

> Are there any internal domain-name resolution issues?
>
> > Caused by:  java.net.UnknownHostException:
> spark-1586333186571-driver-svc.fractal-segmentation.svc
>
> -z
> ________________________________________
> From: Prudhvi Chennuru (CONT) <prudhvi.chenn...@capitalone.com.INVALID>
> Sent: Friday, April 10, 2020 2:44
> To: user
> Subject: Driver pods stuck in running state indefinitely
>
>
> Hi,
>
>    We are running Spark batch jobs on Kubernetes.
>    Kubernetes version: 1.11.5
>    Spark version: 2.3.2
>    Docker version: 19.3.8
>
>    Issue: a few driver pods are stuck in the Running state indefinitely
> with the error
>
>    ```
>    Initial job has not accepted any resources; check your cluster UI
>    to ensure that workers are registered and have sufficient resources.
>    ```
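>
> (For illustration only: that warning means no executor has ever registered
> with the driver. A hypothetical spark-shell probe on the driver, not part of
> the job itself, would show the count staying at zero while the pod is stuck:)
>
> ```scala
> // Paste into spark-shell on the stuck driver (hypothetical probe, not part
> // of the batch job): getExecutorMemoryStatus includes the driver entry,
> // so subtract one to get the number of registered executors.
> val registeredExecutors = sc.getExecutorMemoryStatus.size - 1
> println(s"registered executors: $registeredExecutors")
> ```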
>
> Below is the log from the errored-out executor pods:
>
> ```
> Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1858)
>     at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:63)
>     at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:188)
>     at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:293)
>     at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
> Caused by: org.apache.spark.SparkException: Exception thrown in awaitResult:
>     at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205)
>     at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
>     at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:101)
>     at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:201)
>     at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:64)
>     at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:63)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1840)
>     ... 4 more
> Caused by: java.io.IOException: Failed to connect to spark-1586333186571-driver-svc.fractal-segmentation.svc:7078
>     at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:245)
>     at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:187)
>     at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:198)
>     at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:194)
>     at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:190)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: java.net.UnknownHostException: spark-1586333186571-driver-svc.fractal-segmentation.svc
>     at java.net.InetAddress.getAllByName0(InetAddress.java:1280)
>     at java.net.InetAddress.getAllByName(InetAddress.java:1192)
>     at java.net.InetAddress.getAllByName(InetAddress.java:1126)
>     at java.net.InetAddress.getByName(InetAddress.java:1076)
>     at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:146)
>     at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:143)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at io.netty.util.internal.SocketUtils.addressByName(SocketUtils.java:143)
>     at io.netty.resolver.DefaultNameResolver.doResolve(DefaultNameResolver.java:43)
>     at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:63)
>     at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:55)
>     at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:57)
>     at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:32)
>     at io.netty.resolver.AbstractAddressResolver.resolve(AbstractAddressResolver.java:108)
>     at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:208)
>     at io.netty.bootstrap.Bootstrap.access$000(Bootstrap.java:49)
>     at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:188)
>     at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:174)
>     at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
>     at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
>     at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
>     at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
>     at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
>     at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:978)
>     at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:512)
>     at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:423)
>     at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:482)
>     at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
>     at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
>     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
>     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
>     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
>     ... 1 more
> ```
>
> The stuck driver pod and the errored-out executor pods are running on
> different nodes, but I observed that other driver and executor pods ran
> successfully on the same nodes where those executor pods errored out.
> When I checked the Calico pods on those nodes I didn't see any
> network-related errors, but I do see the error below in the kube-proxy
> pods, and I can also see that the service was created for the stuck
> driver pods.
>
> ```
> E0408 06:39:01.649573       1 proxier.go:1306] Failed to execute iptables-restore: exit status 1 (iptables-restore: line 988 failed
> )
> I0408 06:39:01.649619       1 proxier.go:1308] Closing local ports after iptables-restore failure
> ```
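>
> (For illustration: Spark bundles the fabric8 kubernetes-client, so, assuming
> it is on the classpath and the service account can read Services, a rough
> sketch like the one below would confirm that the driver service exists and is
> headless, i.e. has no cluster IP for kube-proxy to program rules for. The
> object name and the namespace/service-name arguments are placeholders.)
>
> ```scala
> import io.fabric8.kubernetes.client.DefaultKubernetesClient
>
> // Placeholder sketch: verify the driver service exists and is headless.
> object DriverServiceCheck {
>   def main(args: Array[String]): Unit = {
>     val namespace = args(0) // e.g. fractal-segmentation
>     val svcName   = args(1) // e.g. spark-1586333186571-driver-svc
>     val client = new DefaultKubernetesClient() // in-cluster or kubeconfig
>     try {
>       val svc = client.services().inNamespace(namespace).withName(svcName).get()
>       if (svc == null)
>         println(s"Service $svcName not found in namespace $namespace")
>       else
>         // Headless services report their clusterIP as "None".
>         println(s"Service $svcName clusterIP = ${svc.getSpec.getClusterIP}")
>     } finally client.close()
>   }
> }
> ```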
>
>
> My understanding was that, because the services created for the drivers are
> headless, connectivity between executor and driver is established by the
> executor resolving the service name directly to the driver pod IP and
> connecting to that IP, so kube-proxy is not involved in routing the request
> to the driver pod.
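>
> (To illustrate that understanding rather than assert it: for a headless
> service, cluster DNS should hand back the driver pod IP itself, so a sketch
> like the one below, with the pod IP supplied by hand from
> `kubectl get pod -o wide`, shows whether the name resolves straight to the
> pod and therefore bypasses the kube-proxy iptables path. The object name and
> argument handling are placeholders.)
>
> ```scala
> import java.net.InetAddress
>
> // Hypothetical check: does the headless driver service resolve directly
> // to the driver pod IP?
> object HeadlessCheck {
>   def main(args: Array[String]): Unit = {
>     val svcFqdn     = args(0) // e.g. spark-...-driver-svc.<namespace>.svc
>     val driverPodIp = args(1) // pod IP copied from kubectl output
>     val resolved = InetAddress.getAllByName(svcFqdn).map(_.getHostAddress)
>     println(s"$svcFqdn -> ${resolved.mkString(", ")}")
>     if (resolved.contains(driverPodIp))
>       println("DNS returns the pod IP directly; no kube-proxy rules in this path.")
>     else
>       println("DNS does not return the pod IP; the headless assumption does not hold here.")
>   }
> }
> ```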
>
> Is there a way to find the root cause of this issue? If it is not
> Kubernetes related, could it be a kernel or Docker problem, or a mismatch
> between the versions I am using?
>
> The cluster was created 35 days ago and I have been seeing this issue for
> the past 4 days.
>
>
> Appreciate the help.
>
> --
> Thanks,
> Prudhvi Chennuru.
>
> ---------------------------------------------------------------------
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>

-- 
*Thanks,*
*Prudhvi Chennuru.*
