Re: Results never return to driver | Spark Custom Reader

2015-01-25 Thread Harihar Nahak
> [quoted text: Yana Kadiyska's reply of 2015-01-23, reproduced in full below]


-- 
Regards,
Harihar Nahak
BigData Developer
Wynyard
Email: hna...@wynyardgroup.com | Extn: 8019


Re: Results never return to driver | Spark Custom Reader

2015-01-23 Thread Yana Kadiyska
It looks to me like your executor actually crashed and didn't just finish
properly.

Can you check the executor log?

It is available in the UI, or on the worker machine, under
$SPARK_HOME/work/app-20150123155114-/6/stderr (unless you manually changed
the work directory location, but in that case I'd assume you know where to
find the log).
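
Concretely, given the install path visible in your launch command, the
default location on worker VM99 would be something like the following (this
assumes the stock work directory; the app id stays truncated here exactly as
it appears in your log):

  /usr/local/spark-1.2.0-bin-hadoop2.4/work/app-20150123155114-/6/stderr

A stack trace near the end of that file, e.g. a ClassNotFoundException for
the DB driver, is often what is behind an executor exiting with code 1.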

On Thu, Jan 22, 2015 at 10:54 PM, Harihar Nahak wrote:

> [original post quoted in full; see below]


Results never return to driver | Spark Custom Reader

2015-01-22 Thread Harihar Nahak
Hi All,

I wrote a custom reader to read a DB. It is able to return keys and values
as expected, but after it finishes, the results never return to the driver.

Here is the output of the worker log:
15/01/23 15:51:38 INFO worker.ExecutorRunner: Launch command: "java" "-cp"
"::/usr/local/spark-1.2.0-bin-hadoop2.4/sbin/../conf:/usr/local/spark-1.2.0-bin-hadoop2.4/lib/spark-assembly-1.2.0-hadoop2.4.0.jar:/usr/local/spark-1.2.0-bin-hadoop2.4/lib/datanucleus-rdbms-3.2.9.jar:/usr/local/spark-1.2.0-bin-hadoop2.4/lib/datanucleus-api-jdo-3.2.6.jar:/usr/local/spark-1.2.0-bin-hadoop2.4/lib/datanucleus-core-3.2.10.jar:/usr/local/hadoop/etc/hadoop"
"-XX:MaxPermSize=128m" "-Dspark.driver.port=53484" "-Xms1024M" "-Xmx1024M"
"org.apache.spark.executor.CoarseGrainedExecutorBackend"
"akka.tcp://sparkDriver@VM90:53484/user/CoarseGrainedScheduler" "6" "VM99"
"4" "app-20150123155114-"
"akka.tcp://sparkWorker@VM99:44826/user/Worker"
15/01/23 15:51:47 INFO worker.Worker: Executor app-20150123155114-/6
finished with state EXITED message Command exited with code 1 exitStatus 1
15/01/23 15:51:47 WARN remote.ReliableDeliverySupervisor: Association with
remote system [akka.tcp://sparkExecutor@VM99:57695] has failed, address is
now gated for [5000] ms. Reason is: [Disassociated].
15/01/23 15:51:47 INFO actor.LocalActorRef: Message
[akka.remote.transport.ActorTransportAdapter$DisassociateUnderlying] from
Actor[akka://sparkWorker/deadLetters] to
Actor[akka://sparkWorker/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FsparkWorker%40143.96.25.29%3A35065-4#-915179653]
was not delivered. [3] dead letters encountered. This logging can be turned
off or adjusted with configuration settings 'akka.log-dead-letters' and
'akka.log-dead-letters-during-shutdown'.
15/01/23 15:51:49 INFO worker.Worker: Asked to kill unknown executor
app-20150123155114-/6

If anyone notices any clue to fixing this, I would really appreciate it.
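
For reference, here is a stripped-down sketch of the kind of setup I mean,
assuming the reader is written as a Hadoop InputFormat and consumed through
newAPIHadoopRDD (MyDbInputFormat, the config key, and the jar paths below
are placeholders, not the real code):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.Text
import org.apache.spark.{SparkConf, SparkContext}

object CustomReaderJob {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("custom-db-reader")
      // Ship the reader jar and the DB driver to the executors as well;
      // a jar that is missing from the executor classpath is a common way
      // to end up with "Command exited with code 1" on the worker.
      .setJars(Seq("/path/to/custom-reader.jar", "/path/to/db-driver.jar"))
    val sc = new SparkContext(conf)

    val hadoopConf = new Configuration()
    hadoopConf.set("mydb.connection.url", "jdbc:...") // placeholder key

    // MyDbInputFormat stands in for the custom InputFormat[Text, Text].
    val rdd = sc.newAPIHadoopRDD(
      hadoopConf,
      classOf[MyDbInputFormat],
      classOf[Text],
      classOf[Text])

    // Hadoop RecordReaders may reuse the same key/value objects, so copy
    // them to immutable values before pulling anything back to the driver.
    val sample = rdd.map { case (k, v) => (k.toString, v.toString) }.take(10)
    sample.foreach(println)

    sc.stop()
  }
}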



-
--Harihar
--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Results-never-return-to-driver-Spark-Custom-Reader-tp21328.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org