Re: Issue with executor

2021-01-20 Thread Vikas Garg
The issue is resolved. The resolution is a little odd, but it worked: the
problem was a Scala version mismatch between the projects in my package.
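
For anyone who hits the same error: the fix amounted to compiling every
module against the same Scala version. A minimal build.sbt sketch of that
kind of setup (the module names and version numbers here are illustrative,
not the actual project):

  // build.sbt -- one Scala version for the whole build; it must match
  // the Scala binary version the Spark artifacts were built for
  ThisBuild / scalaVersion := "2.12.12"

  lazy val core = (project in file("core"))
    .settings(
      // %% appends the Scala binary version, keeping it in sync with scalaVersion
      libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.0.1"
    )

  lazy val jobs = (project in file("jobs"))
    .dependsOn(core) // no per-module scalaVersion override, so no mismatch

Running show scalaVersion from the sbt shell in each project is a quick way
to confirm they all agree.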

On Wed, 20 Jan 2021 at 18:07, Mich Talebzadeh wrote:

> Hi Vikas,
>
> Are you running this on your local laptop, or through an IDE?
>
> What is your available memory for Spark?
>
> Start with a minimal configuration like the one below:
>
> def spark_session_local(appName):
>     return SparkSession.builder \
>         .master('local[1]') \
>         .appName(appName) \
>         .enableHiveSupport() \
>         .getOrCreate()
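>
> Since your job itself is in Scala, the same minimal builder there would
> look roughly like this (a sketch along the same lines, untested):
>
>   val spark = SparkSession.builder()
>     .master("local[1]") // a single core to start with; scale up once it works
>     .appName("Spark Job")
>     .enableHiveSupport() // keep this only if you actually use Hive
>     .getOrCreate()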
>
>
> HTH
>
> On Wed, 20 Jan 2021 at 12:32, Vikas Garg wrote:
>
>> Hi Sachit,
>>
>> I am running it in local mode. The IP mentioned is a private address, so
>> it is of no use to anyone.
>>
>> On Wed, 20 Jan 2021 at 17:37, Sachit Murarka wrote:
>>
>>> Hi Vikas
>>>
>>> 1. Are you running in local mode? The master is set to local[*].
>>> 2. Please mask the IP or any other confidential info when sharing logs.
>>>
>>> Thanks
>>> Sachit
>>>
>>> On Wed, 20 Jan 2021, 17:35 Vikas Garg wrote:
>>>
>>>> Hi,
>>>>
>>>> I am facing an issue with the Spark executor. I have been struggling
>>>> with it for many days and have been unable to resolve it.
>>>>
>>>> Below is the configuration I have given:
>>>>
>>>>   val spark = SparkSession.builder()
>>>>     .appName("Spark Job")
>>>>     .master("local[*]")
>>>>     .config("spark.dynamicAllocation.enabled", true)
>>>>     .config("spark.shuffle.service.enabled", true)
>>>>     .config("spark.driver.maxResultSize", "8g")
>>>>     .config("spark.driver.memory", "8g")
>>>>     .config("spark.executor.memory", "8g")
>>>>     .config("spark.network.timeout", "3600s")
>>>>     .getOrCreate()
>>>>
>>>> 21/01/20 17:06:57 ERROR RetryingBlockFetcher: Exception while beginning
>>>> fetch of 1 outstanding blocks
>>>> java.io.IOException: Failed to connect to
>>>> del1-lhp-n9.synapse.com/192.168.166.213:51348
>>>>     at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:253)
>>>>     at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:195)
>>>>     at org.apache.spark.network.netty.NettyBlockTransferService$$anon$2.createAndStart(NettyBlockTransferService.scala:122)
>>>>     at org.apache.spark.network.shuffle.RetryingBlockFetcher.fetchAllOutstanding(RetryingBlockFetcher.java:141)
>>>>     at org.apache.spark.network.shuffle.RetryingBlockFetcher.start(RetryingBlockFetcher.java:121)
>>>>     at org.apache.spark.network.netty.NettyBlockTransferService.fetchBlocks(NettyBlockTransferService.scala:143)
>>>>     at org.apache.spark.network.BlockTransferService.fetchBlockSync(BlockTransferService.scala:103)
>>>>     at org.apache.spark.storage.BlockManager.fetchRemoteManagedBuffer(BlockManager.scala:1010)
>>>>     at org.apache.spark.storage.BlockManager.$anonfun$getRemoteBlock$8(BlockManager.scala:954)
>>>>     at scala.Option.orElse(Option.scala:289)
>>>>     at org.apache.spark.storage.BlockManager.getRemoteBlock(BlockManager.scala:954)
>>>>     at org.apache.spark.storage.BlockManager.getRemoteBytes(BlockManager.scala:1092)
>>>>     at org.apache.spark.scheduler.TaskResultGetter$$anon$3.$anonfun$run$1(TaskResultGetter.scala:88)
>>>>     at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
>>>>     at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1932)
>>>>     at org.apache.spark.scheduler.TaskResultGetter$$anon$3.run(TaskResultGetter.scala:63)
>>>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>     at java.lang.Thread.run(Thread.java:748)
>>>> Caused by: io.netty.channel.AbstractChannel$AnnotatedSocketException:
>>>> Permission denied: no further information:
>>>> del1-lhp-n9.synapse.com/192.168.166.213:51348
>>>> Caused by: java.net.SocketException: Permission denied: no further information
>>>>     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>>>     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:715)
>>>>     at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
>>>>     at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
>>>>     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
>>>>     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
>>>>     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
>>>>     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
>>>>     at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
>>>>     at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>>>>     at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>>>>     at java.lang.Thread.run(Thread.java:748)
>>>>
>>>> 21/01/20 17:06:57 ERROR RetryingBlockFetcher: Exception while beginning
>>>> fetch of 1 outstanding blocks
>>>> java.io.IOException: Failed to connect to
>>>> del1-lhp-n9.synapse.com/192.168.166.213:51348
