[jira] [Commented] (SPARK-5594) SparkException: Failed to get broadcast (TorrentBroadcast)

2015-08-13 Thread Kaveen Raajan (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695107#comment-14695107
 ] 

Kaveen Raajan commented on SPARK-5594:
--


Removing the *spark.cleaner.ttl* configuration seems to work fine, but my 
Spark Thrift Server has to run 24/7.
The Spark documentation describes this property as follows [Spark 
Conf-1.4.1|http://spark.apache.org/docs/1.4.1/configuration.html#execution-behavior]:
{panel}
Periodic cleanups will ensure that metadata older than this duration will be 
forgotten. This is useful for running Spark for many hours / days (for example, 
running 24/7 in case of Spark Streaming applications). Note that any RDD that 
persists in memory for more than this duration will be cleared as well.
{panel}

Won't removing this configuration cause problems on a long-running server as metadata keeps accumulating?
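For what it's worth, since Spark 1.0 broadcast and RDD cleanup is handled by the reference-tracking ContextCleaner even when spark.cleaner.ttl is unset, so a long-running server should not leak metadata just because the TTL cleaner is disabled. A minimal sketch of the relevant settings (property names taken from the Spark 1.x source, not the official docs; treat them as assumptions to verify against your version):

{code}
# spark-defaults.conf (sketch)
# Leave spark.cleaner.ttl unset: the TTL cleaner drops blocks purely by age
# and can delete broadcast pieces that running tasks still need.
# The GC-driven reference-tracking cleaner is on by default and controlled by:
spark.cleaner.referenceTracking           true
spark.cleaner.referenceTracking.blocking  true
{code}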


> SparkException: Failed to get broadcast (TorrentBroadcast)
> --
>
> Key: SPARK-5594
> URL: https://issues.apache.org/jira/browse/SPARK-5594
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.2.0, 1.3.0
>Reporter: John Sandiford
>Priority: Critical
>
> I am uncertain whether this is a bug; however, I am getting the error below 
> when running on a cluster (it works locally), and I have no idea what is 
> causing it or where to look for more information.
> Any help is appreciated. Others appear to experience the same issue, but I 
> have not found any solutions online.
> Please note that this only happens with certain code and is repeatable; all 
> my other Spark jobs work fine.
> {noformat}
> ERROR TaskSetManager: Task 3 in stage 6.0 failed 4 times; aborting job
> Exception in thread "main" org.apache.spark.SparkException: Job aborted due 
> to stage failure: Task 3 in stage 6.0 failed 4 times, most recent failure: 
> Lost task 3.3 in stage 6.0 (TID 24, ): java.io.IOException: 
> org.apache.spark.SparkException: Failed to get broadcast_6_piece0 of 
> broadcast_6
> at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1011)
> at 
> org.apache.spark.broadcast.TorrentBroadcast.readBroadcastBlock(TorrentBroadcast.scala:164)
> at 
> org.apache.spark.broadcast.TorrentBroadcast._value$lzycompute(TorrentBroadcast.scala:64)
> at 
> org.apache.spark.broadcast.TorrentBroadcast._value(TorrentBroadcast.scala:64)
> at 
> org.apache.spark.broadcast.TorrentBroadcast.getValue(TorrentBroadcast.scala:87)
> at org.apache.spark.broadcast.Broadcast.value(Broadcast.scala:70)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:58)
> at org.apache.spark.scheduler.Task.run(Task.scala:56)
> at 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: org.apache.spark.SparkException: Failed to get broadcast_6_piece0 
> of broadcast_6
> at 
> org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1$$anonfun$2.apply(TorrentBroadcast.scala:137)
> at 
> org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1$$anonfun$2.apply(TorrentBroadcast.scala:137)
> at scala.Option.getOrElse(Option.scala:120)
> at 
> org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1.apply$mcVI$sp(TorrentBroadcast.scala:136)
> at 
> org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1.apply(TorrentBroadcast.scala:119)
> at 
> org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1.apply(TorrentBroadcast.scala:119)
> at scala.collection.immutable.List.foreach(List.scala:318)
> at 
> org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$readBlocks(TorrentBroadcast.scala:119)
> at 
> org.apache.spark.broadcast.TorrentBroadcast$$anonfun$readBroadcastBlock$1.apply(TorrentBroadcast.scala:174)
> at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1008)
> ... 11 more
> {noformat}
> Driver stacktrace:
> {noformat}
> at 
> org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1214)
> at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1203)
> at 
> org.apache.spark.scheduler.DAGScheduler$$anon

[jira] [Comment Edited] (SPARK-5594) SparkException: Failed to get broadcast (TorrentBroadcast)

2015-08-12 Thread Kaveen Raajan (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14694780#comment-14694780
 ] 

Kaveen Raajan edited comment on SPARK-5594 at 8/13/15 6:31 AM:
---

Hi,
I'm also facing this error when two job threads run simultaneously, each in 
its own SparkContext.

We are testing with the Spark Thrift Server by submitting two similar 
requests from different threads simultaneously, and we clean the SparkContext 
every 2 minutes:
spark.cleaner.ttl 2000
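Note that spark.cleaner.ttl is expressed in seconds (per the Spark configuration docs), so 2000 corresponds to roughly 33 minutes rather than 2 minutes; a 2-minute TTL would look like this (sketch):

{code}
# spark-defaults.conf (sketch); value is in seconds
spark.cleaner.ttl   120
{code}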

The query below works for the first few tries, but after some time it throws 
the broadcast error. We cannot find an exact reproduction procedure for this error.
{panel:title=Query tried}
SELECT column1, SUM(column2) FROM TableName JOIN ( SELECT column1, COUNT(1), 
 SUM(column2) FROM TableName GROUP BY column1 ORDER BY column3 DESC LIMIT 5) 
t0 ON (column1 = t0.column1) GROUP BY column1
{panel}

Error Detail:
{code}
15/08/12 12:13:58 ERROR thriftserver.SparkExecuteStatementOperation: Error 
executing query:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in 
stage 300.0 failed 4 times, most recent failure: Lost task 1.3 in stage 300.0 
(TID15134, sparkHeadNode.root.lan): java.io.IOException: 
org.apache.spark.SparkException: Failed to get broadcast_2_piece0 of broadcast_2
at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1257)
at 
org.apache.spark.broadcast.TorrentBroadcast.readBroadcastBlock(TorrentBroadcast.scala:165)
at 
org.apache.spark.broadcast.TorrentBroadcast._value$lzycompute(TorrentBroadcast.scala:64)
{code}





[jira] [Closed] (SPARK-9587) Spark Web UI not displaying while changing another network

2015-08-04 Thread Kaveen Raajan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-9587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kaveen Raajan closed SPARK-9587.

Resolution: Not A Problem

This works if I *set SPARK_LOCAL_HOSTNAME={COMPUTERNAME}*. The Spark driver 
and web addresses then refer to the hostname instead of the IP.
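A minimal sketch of that workaround for a Windows setup (assuming conf\spark-env.cmd is picked up by the launch scripts and that %COMPUTERNAME% is the built-in Windows hostname variable):

{code}
rem conf\spark-env.cmd (sketch)
set SPARK_LOCAL_HOSTNAME=%COMPUTERNAME%
{code}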

> Spark Web UI not displaying while changing another network
> --
>
> Key: SPARK-9587
> URL: https://issues.apache.org/jira/browse/SPARK-9587
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 1.4.1
> Environment: Windows,
> Hadoop-2.5.2,
>Reporter: Kaveen Raajan
>
> I want to start my spark-shell with localhost instead of an IP address. I'm 
> running spark-shell in yarn-client mode. My Hadoop setup runs as a 
> single-node cluster bound to localhost.
> I changed the following properties in spark-defaults.conf:
> {panel:title=spark-defaults.conf}
> spark.driver.host    localhost
> spark.driver.hosts   localhost
> {panel}
> Initially, while starting spark-shell, I'm connected to a public network 
> (172.16.xxx.yyy). If I disconnect from the network, Spark jobs keep working 
> without any problem, but the Spark web UI does not.
> The ApplicationMaster always connects with the current IP instead of localhost.
> My log is here:
> {code}
> 15/08/04 10:17:10 INFO spark.SecurityManager: Changing view acls to: SYSTEM
> 15/08/04 10:17:10 INFO spark.SecurityManager: Changing modify acls to: SYSTEM
> 15/08/04 10:17:10 INFO spark.SecurityManager: SecurityManager: authentication 
> disabled; ui acls disabled; users with view permissions: Set(SYSTEM); users 
> with modify permissions: Set(SYSTEM)
> 15/08/04 10:17:10 INFO spark.HttpServer: Starting HTTP Server
> 15/08/04 10:17:10 INFO server.Server: jetty-8.y.z-SNAPSHOT
> 15/08/04 10:17:10 INFO server.AbstractConnector: Started 
> SocketConnector@0.0.0.0:58416
> 15/08/04 10:17:10 INFO util.Utils: Successfully started service 'HTTP class 
> server' on port 58416.
> 15/08/04 10:17:15 INFO spark.SparkContext: Running Spark version 1.4.0
> 15/08/04 10:17:15 INFO spark.SecurityManager: Changing view acls to: SYSTEM
> 15/08/04 10:17:15 INFO spark.SecurityManager: Changing modify acls to: SYSTEM
> 15/08/04 10:17:15 INFO spark.SecurityManager: SecurityManager: authentication 
> disabled; ui acls disabled; users with view permissions: Set(SYSTEM); users 
> with modify permissions: Set(SYSTEM)
> Welcome to
>     __
>  / __/__  ___ _/ /__
> _\ \/ _ \/ _ `/ __/  '_/
>/___/ .__/\_,_/_/ /_/\_\   version 1.4.0
>   /_/
> Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_51)
> Type in expressions to have them evaluated.
> Type :help for more information.
> 15/08/04 10:17:15 INFO slf4j.Slf4jLogger: Slf4jLogger started
> 15/08/04 10:17:15 INFO Remoting: Starting remoting
> 15/08/04 10:17:16 INFO Remoting: Remoting started; listening on addresses 
> :[akka.tcp://sparkDriver@localhost:58439]
> 15/08/04 10:17:16 INFO util.Utils: Successfully started service 'sparkDriver' 
> on port 58439.
> 15/08/04 10:17:16 INFO spark.SparkEnv: Registering MapOutputTracker
> 15/08/04 10:17:16 INFO spark.SparkEnv: Registering BlockManagerMaster
> 15/08/04 10:17:16 INFO storage.DiskBlockManager: Created local directory at 
> C:\Windows\Temp\spark-86221988-7e8b-4340-be80-a2be283845e3\blockmgr-2c1b95de-936b-44f3-b98d-263c45e310ca
> 15/08/04 10:17:16 INFO storage.MemoryStore: MemoryStore started with capacity 
> 265.4 MB
> 15/08/04 10:17:16 INFO spark.HttpFileServer: HTTP File server directory is 
> C:\Windows\Temp\spark-86221988-7e8b-4340-be80-a2be283845e3\httpd-da7b686d-deb0-446d-af20-42ded6d6d035
> 15/08/04 10:17:16 INFO spark.HttpServer: Starting HTTP Server
> 15/08/04 10:17:16 INFO server.Server: jetty-8.y.z-SNAPSHOT
> 15/08/04 10:17:16 INFO server.AbstractConnector: Started 
> SocketConnector@0.0.0.0:58440
> 15/08/04 10:17:16 INFO util.Utils: Successfully started service 'HTTP file 
> server' on port 58440.
> 15/08/04 10:17:16 INFO spark.SparkEnv: Registering OutputCommitCoordinator
> 15/08/04 10:17:16 INFO server.Server: jetty-8.y.z-SNAPSHOT
> 15/08/04 10:17:16 INFO server.AbstractConnector: Started 
> SelectChannelConnector@0.0.0.0:4040
> 15/08/04 10:17:16 INFO util.Utils: Successfully started service 'SparkUI' on 
> port 4040.
> 15/08/04 10:17:16 INFO ui.SparkUI: Started SparkUI at 
> http://172.16.123.123:4040
> 15/08/04 10:17:16 INFO client.RMProxy: Connecting to ResourceManager at 
> /0.0.0.0:8032
> 15/08/04 10:17:17 INFO yarn.Client: Requesting a new application from cluster 
> with 1 NodeManagers
> 15/08/04 10:17:17 INFO yarn.Client: Verifying our application has not 
> requested more than the maximum memory capability of the cluster (2048 MB per 
> container)
> 15/08/04 10:17:17 INFO yarn.Client: Will allocate AM container, with 

[jira] [Comment Edited] (SPARK-9587) Spark Web UI not displaying while changing another network

2015-08-04 Thread Kaveen Raajan (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-9587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653178#comment-14653178
 ] 

Kaveen Raajan edited comment on SPARK-9587 at 8/4/15 7:08 AM:
--

Hi [~srowen],
Thanks for the response.
I found *SPARK_PUBLIC_DNS* in the Spark configuration for overriding the 
advertised DNS name, but the documentation is very brief and I don't 
understand what it does.

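As I understand it, SPARK_PUBLIC_DNS sets the hostname that the driver and web UI advertise to other hosts. A sketch of setting it (assuming conf/spark-env.sh; on Windows the equivalent would be `set` in spark-env.cmd):

{code}
# conf/spark-env.sh (sketch)
export SPARK_PUBLIC_DNS=localhost
{code}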





[jira] [Comment Edited] (SPARK-9587) Spark Web UI not displaying while changing another network

2015-08-03 Thread Kaveen Raajan (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-9587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653068#comment-14653068
 ] 

Kaveen Raajan edited comment on SPARK-9587 at 8/4/15 5:31 AM:
--

I also tried *set SPARK_LOCAL_IP=localhost*, but the Spark Web UI still does not display.

The same problem occurs when I connect through the Hadoop proxy server.
Is there an alternative way to make the Spark driver, launcher, and executors
all connect with localhost instead of the IP?

Or else, how can I change 'spark.driver.appUIAddress'? I changed this property
in spark-default.conf, but it is not taking effect.
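
For reference, a minimal sketch of the workaround being attempted here (assuming a Windows command prompt and Spark 1.4.x; whether these settings actually take effect in yarn-client mode is exactly what is in question):
{code}
REM force the driver to bind and advertise localhost (sketch, not verified)
set SPARK_LOCAL_IP=localhost
spark-shell --master yarn-client --conf spark.driver.host=localhost
{code}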


was (Author: kaveenbigdata):
Even I tried to *set SPARK_LOCAL_IP=localhost*, by using this property. Spark 
Web UI is not displaying at any time.

If I connecting hadoop proxyserver mean same problem raise. 
Is there any alternative solution to connect all my spark driver, launcher, 
executors with localhost instead of IP?

> Spark Web UI not displaying while changing another network
> --
>
> Key: SPARK-9587
> URL: https://issues.apache.org/jira/browse/SPARK-9587
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 1.4.1
> Environment: Windows,
> Hadoop-2.5.2,
>Reporter: Kaveen Raajan
>

[jira] [Commented] (SPARK-9587) Spark Web UI not displaying while changing another network

2015-08-03 Thread Kaveen Raajan (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-9587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653068#comment-14653068
 ] 

Kaveen Raajan commented on SPARK-9587:
--

I also tried *set SPARK_LOCAL_IP=localhost*, but the Spark Web UI still does not display.

The same problem occurs when I connect through the Hadoop proxy server.
Is there an alternative way to make the Spark driver, launcher, and executors
all connect with localhost instead of the IP?

> Spark Web UI not displaying while changing another network
> --
>
> Key: SPARK-9587
> URL: https://issues.apache.org/jira/browse/SPARK-9587
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 1.4.1
> Environment: Windows,
> Hadoop-2.5.2,
>Reporter: Kaveen Raajan
>

[jira] [Created] (SPARK-9587) Spark Web UI not displaying while changing another network

2015-08-03 Thread Kaveen Raajan (JIRA)
Kaveen Raajan created SPARK-9587:


 Summary: Spark Web UI not displaying while changing another network
 Key: SPARK-9587
 URL: https://issues.apache.org/jira/browse/SPARK-9587
 Project: Spark
  Issue Type: Bug
  Components: Web UI
Affects Versions: 1.4.1
 Environment: Windows,
Hadoop-2.5.2,
Reporter: Kaveen Raajan


I want to start spark-shell with localhost instead of an IP address. I'm running
spark-shell in yarn-client mode, and my Hadoop installation runs as a
single-node cluster bound to localhost.

I changed the following properties in spark-default.conf:
{panel:title=spark-default.conf}
spark.driver.host    localhost
spark.driver.hosts   localhost
{panel}
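
As an aside, a hedged sketch of how such an entry would normally be written (assuming Spark 1.4.x: `spark.driver.host` is the documented property, while `spark.driver.hosts` does not appear in the Spark configuration reference):
{code}
# spark-default.conf — property name and value separated by whitespace
spark.driver.host    localhost
{code}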

Initially, when starting spark-shell, I am connected to a public network
(172.16.xxx.yyy). If I disconnect from that network, Spark jobs run without
any problem, but the Spark web UI does not work.

The ApplicationMaster always connects with the current IP instead of localhost.
My log is here:
{code}
15/08/04 10:17:10 INFO spark.SecurityManager: Changing view acls to: SYSTEM
15/08/04 10:17:10 INFO spark.SecurityManager: Changing modify acls to: SYSTEM
15/08/04 10:17:10 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(SYSTEM); users with modify permissions: Set(SYSTEM)
15/08/04 10:17:10 INFO spark.HttpServer: Starting HTTP Server
15/08/04 10:17:10 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/08/04 10:17:10 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:58416
15/08/04 10:17:10 INFO util.Utils: Successfully started service 'HTTP class server' on port 58416.
15/08/04 10:17:15 INFO spark.SparkContext: Running Spark version 1.4.0
15/08/04 10:17:15 INFO spark.SecurityManager: Changing view acls to: SYSTEM
15/08/04 10:17:15 INFO spark.SecurityManager: Changing modify acls to: SYSTEM
15/08/04 10:17:15 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(SYSTEM); users with modify permissions: Set(SYSTEM)
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.4.0
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_51)
Type in expressions to have them evaluated.
Type :help for more information.
15/08/04 10:17:15 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/08/04 10:17:15 INFO Remoting: Starting remoting
15/08/04 10:17:16 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@localhost:58439]
15/08/04 10:17:16 INFO util.Utils: Successfully started service 'sparkDriver' on port 58439.
15/08/04 10:17:16 INFO spark.SparkEnv: Registering MapOutputTracker
15/08/04 10:17:16 INFO spark.SparkEnv: Registering BlockManagerMaster
15/08/04 10:17:16 INFO storage.DiskBlockManager: Created local directory at C:\Windows\Temp\spark-86221988-7e8b-4340-be80-a2be283845e3\blockmgr-2c1b95de-936b-44f3-b98d-263c45e310ca
15/08/04 10:17:16 INFO storage.MemoryStore: MemoryStore started with capacity 265.4 MB
15/08/04 10:17:16 INFO spark.HttpFileServer: HTTP File server directory is C:\Windows\Temp\spark-86221988-7e8b-4340-be80-a2be283845e3\httpd-da7b686d-deb0-446d-af20-42ded6d6d035
15/08/04 10:17:16 INFO spark.HttpServer: Starting HTTP Server
15/08/04 10:17:16 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/08/04 10:17:16 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:58440
15/08/04 10:17:16 INFO util.Utils: Successfully started service 'HTTP file server' on port 58440.
15/08/04 10:17:16 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/08/04 10:17:16 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/08/04 10:17:16 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/08/04 10:17:16 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/08/04 10:17:16 INFO ui.SparkUI: Started SparkUI at http://172.16.123.123:4040
15/08/04 10:17:16 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/08/04 10:17:17 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
15/08/04 10:17:17 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (2048 MB per container)
15/08/04 10:17:17 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/08/04 10:17:17 INFO yarn.Client: Setting up container launch context for our AM
15/08/04 10:17:17 INFO yarn.Client: Preparing resources for our AM container
15/08/04 10:17:17 INFO yarn.Client: Uploading resource file:/C://Spark/lib/spark-assembly-1.4.0-hadoop2.5.2.jar -> hdfs://localhost:9000/user/SYSTEM/.sparkStaging/application_1438662854479_0001/spark-assembly-1.4.0-hadoop2.5.2.jar
15/08/04 10:17:20 INFO yarn.Client: Uploading resource file:/C:/

[jira] [Closed] (SPARK-8155) support of proxy user not working in spaced username on windows

2015-06-22 Thread Kaveen Raajan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kaveen Raajan closed SPARK-8155.

Resolution: Not A Problem

To run spark-shell in YARN mode with a proxy user, the proxy hosts and proxy
groups must be configured for the user that runs Spark.
For example, when running Spark as the *SYSTEM* user:
{code}spark-shell --master yarn-client --proxy-user SYSTEM{code}

Add the following properties to *core-site.xml* and restart the YARN services:
{code}
<property>
  <name>hadoop.proxyuser.SYSTEM.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.SYSTEM.groups</name>
  <value>*</value>
</property>
{code}

Now everything works fine for me. This is not a problem.
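
Where a full restart is undesirable, the proxy-user settings can usually be reloaded in place (a sketch, assuming the Hadoop 2.5.x admin commands; behavior may vary by deployment):
{code}
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration
{code}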

> support of proxy user not working in spaced username on windows
> ---
>
> Key: SPARK-8155
> URL: https://issues.apache.org/jira/browse/SPARK-8155
> Project: Spark
>  Issue Type: Story
>  Components: Spark Core
>Affects Versions: 1.3.1
> Environment: windows-8/7/server 2008
> Hadoop-2.5.2
> Java-1.7.51
> username - "kaveen raajan"
>Reporter: Kaveen Raajan
>

[jira] [Updated] (SPARK-8155) support of proxy user not working in spaced username on windows

2015-06-08 Thread Kaveen Raajan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kaveen Raajan updated SPARK-8155:
-
Description: 
I'm using Spark 1.3.1 on a Windows machine whose username contains a space
(current username: "kaveen raajan"). I tried to run the following command:
{code}spark-shell --master yarn-client --proxy-user SYSTEM{code}
It runs successfully for a user whose name has no space (the application also
runs as the SYSTEM user), but when I run it as the spaced user (kaveen raajan)
it throws the following error:

{code}
15/06/05 16:52:48 INFO spark.SecurityManager: Changing view acls to: SYSTEM
15/06/05 16:52:48 INFO spark.SecurityManager: Changing modify acls to: SYSTEM
15/06/05 16:52:48 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(SYSTEM); users with modify permissions: Set(SYSTEM)
15/06/05 16:52:49 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/06/05 16:52:49 INFO Remoting: Starting remoting
15/06/05 16:52:49 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@Master:52137]
15/06/05 16:52:49 INFO util.Utils: Successfully started service 'sparkDriver' on port 52137.
15/06/05 16:52:49 INFO spark.SparkEnv: Registering MapOutputTracker
15/06/05 16:52:49 INFO spark.SparkEnv: Registering BlockManagerMaster
15/06/05 16:52:49 INFO storage.DiskBlockManager: Created local directory at C:\Users\KAVEEN~1\AppData\Local\Temp\spark-d5b43891-274c-457d-aa3a-d79a536fd536\blockmgr-e980101b-4f93-455a-8a05-9185dcab9f8e
15/06/05 16:52:49 INFO storage.MemoryStore: MemoryStore started with capacity 265.4 MB
15/06/05 16:52:49 INFO spark.HttpFileServer: HTTP File server directory is C:\Users\KAVEEN~1\AppData\Local\Temp\spark-a35e3f17-641c-4ae3-90f2-51eac901b799\httpd-ecea93ad-c285-4c62-9222-01a9d6ff24e4
15/06/05 16:52:49 INFO spark.HttpServer: Starting HTTP Server
15/06/05 16:52:49 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/06/05 16:52:49 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:52138
15/06/05 16:52:49 INFO util.Utils: Successfully started service 'HTTP file server' on port 52138.
15/06/05 16:52:49 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/06/05 16:52:49 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/06/05 16:52:49 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/06/05 16:52:49 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/06/05 16:52:49 INFO ui.SparkUI: Started SparkUI at http://Master:4040
15/06/05 16:52:49 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032

java.lang.NullPointerException
        at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:145)
        at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:49)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1027)
        at $iwC$$iwC.<init>(<console>:9)
        at $iwC.<init>(<console>:18)
        at <init>(<console>:20)
        at .<init>(<console>:24)
        at .<clinit>(<console>)
        at .<init>(<console>:7)
        at .<clinit>(<console>)
        at $print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
        at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
        at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
        at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
        at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
        at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
        at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:130)
        at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:122)
        at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
        at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:122)
        at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$

[jira] [Updated] (SPARK-8155) support of proxy user not working in spaced username on windows

2015-06-08 Thread Kaveen Raajan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kaveen Raajan updated SPARK-8155:
-
Issue Type: Story  (was: Technical task)
Parent: (was: SPARK-5493)

> support of proxy user not working in spaced username on windows
> ---
>
> Key: SPARK-8155
> URL: https://issues.apache.org/jira/browse/SPARK-8155
> Project: Spark
>  Issue Type: Story
>  Components: Spark Core
>Affects Versions: 1.3.1
> Environment: windows-8/7/server 2008
> Hadoop-2.5.2
> Java-1.7.51
> username - "kaveen raajan"
>Reporter: Kaveen Raajan
>

[jira] [Updated] (SPARK-8155) support of proxy user not working in spaced username on windows

2015-06-08 Thread Kaveen Raajan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kaveen Raajan updated SPARK-8155:
-
Issue Type: Technical task  (was: Sub-task)

> support of proxy user not working in spaced username on windows
> ---
>
> Key: SPARK-8155
> URL: https://issues.apache.org/jira/browse/SPARK-8155
> Project: Spark
>  Issue Type: Technical task
>  Components: Spark Core
>Affects Versions: 1.3.1
> Environment: windows-8/7/server 2008
> Hadoop-2.5.2
> Java-1.7.51
> username - "kaveen raajan"
>Reporter: Kaveen Raajan
>
> I'm using Spark 1.3.1 on a Windows machine whose username contains a space (current 
> username: "kaveen raajan"). I tried to run the following command:
>  {code}spark-shell --master yarn-client --proxy-user SYSTEM{code}
> It runs successfully for usernames without spaces, and the application also runs as 
> the SYSTEM user. But when I run it under the spaced username ("kaveen raajan"), it 
> throws the following error:
> {code}
> 15/06/05 16:52:48 INFO spark.SecurityManager: Changing view acls to: SYSTEM
> 15/06/05 16:52:48 INFO spark.SecurityManager: Changing modify acls to: SYSTEM
> 15/06/05 16:52:48 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(SYSTEM); users with modify permissions: Set(SYSTEM)
> 15/06/05 16:52:49 INFO slf4j.Slf4jLogger: Slf4jLogger started
> 15/06/05 16:52:49 INFO Remoting: Starting remoting
> 15/06/05 16:52:49 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@Master:52137]
> 15/06/05 16:52:49 INFO util.Utils: Successfully started service 'sparkDriver' on port 52137.
> 15/06/05 16:52:49 INFO spark.SparkEnv: Registering MapOutputTracker
> 15/06/05 16:52:49 INFO spark.SparkEnv: Registering BlockManagerMaster
> 15/06/05 16:52:49 INFO storage.DiskBlockManager: Created local directory at C:\Users\KAVEEN~1\AppData\Local\Temp\spark-d5b43891-274c-457d-aa3a-d79a536fd536\blockmgr-e980101b-4f93-455a-8a05-9185dcab9f8e
> 15/06/05 16:52:49 INFO storage.MemoryStore: MemoryStore started with capacity 265.4 MB
> 15/06/05 16:52:49 INFO spark.HttpFileServer: HTTP File server directory is C:\Users\KAVEEN~1\AppData\Local\Temp\spark-a35e3f17-641c-4ae3-90f2-51eac901b799\httpd-ecea93ad-c285-4c62-9222-01a9d6ff24e4
> 15/06/05 16:52:49 INFO spark.HttpServer: Starting HTTP Server
> 15/06/05 16:52:49 INFO server.Server: jetty-8.y.z-SNAPSHOT
> 15/06/05 16:52:49 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:52138
> 15/06/05 16:52:49 INFO util.Utils: Successfully started service 'HTTP file server' on port 52138.
> 15/06/05 16:52:49 INFO spark.SparkEnv: Registering OutputCommitCoordinator
> 15/06/05 16:52:49 INFO server.Server: jetty-8.y.z-SNAPSHOT
> 15/06/05 16:52:49 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
> 15/06/05 16:52:49 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
> 15/06/05 16:52:49 INFO ui.SparkUI: Started SparkUI at http://Master:4040
> 15/06/05 16:52:49 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
> java.lang.NullPointerException
> at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:145)
> at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:49)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1027)
> at $iwC$$iwC.<init>(<console>:9)
> at $iwC.<init>(<console>:18)
> at <init>(<console>:20)
> at .<init>(<console>:24)
> at .<clinit>()
> at .<init>(<console>:7)
> at .<clinit>()
> at $print()
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
> at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
> at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
> at org.apache.spark.repl.SparkILoop.real

[jira] [Updated] (SPARK-8155) support of proxy user not working in spaced username on windows

2015-06-07 Thread Kaveen Raajan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kaveen Raajan updated SPARK-8155:
-
Fix Version/s: (was: 1.3.0)


[jira] [Created] (SPARK-8155) support of proxy user not working in spaced username on windows

2015-06-07 Thread Kaveen Raajan (JIRA)
Kaveen Raajan created SPARK-8155:


 Summary: support of proxy user not working in spaced username on 
windows
 Key: SPARK-8155
 URL: https://issues.apache.org/jira/browse/SPARK-8155
 Project: Spark
  Issue Type: Sub-task
Affects Versions: 1.3.1
 Environment: windows-8/7/server 2008
Hadoop-2.5.2
Java-1.7.51
username - "kaveen raajan"
Reporter: Kaveen Raajan


I'm using Spark 1.3.1 on a Windows machine whose username contains a space (current 
username: "kaveen raajan"). I tried to run the following command:
 {code}spark-shell --master yarn-client --proxy-user SYSTEM{code}
It runs successfully for usernames without spaces, and the application also runs as 
the SYSTEM user. But when I run it under the spaced username ("kaveen raajan"), it 
throws the following error:

{code}
15/06/05 16:52:48 INFO spark.SecurityManager: Changing view acls to: SYSTEM
15/06/05 16:52:48 INFO spark.SecurityManager: Changing modify acls to: SYSTEM
15/06/05 16:52:48 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(SYSTEM); users with modify permissions: Set(SYSTEM)
15/06/05 16:52:49 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/06/05 16:52:49 INFO Remoting: Starting remoting
15/06/05 16:52:49 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@Master:52137]
15/06/05 16:52:49 INFO util.Utils: Successfully started service 'sparkDriver' on port 52137.
15/06/05 16:52:49 INFO spark.SparkEnv: Registering MapOutputTracker
15/06/05 16:52:49 INFO spark.SparkEnv: Registering BlockManagerMaster
15/06/05 16:52:49 INFO storage.DiskBlockManager: Created local directory at C:\Users\KAVEEN~1\AppData\Local\Temp\spark-d5b43891-274c-457d-aa3a-d79a536fd536\blockmgr-e980101b-4f93-455a-8a05-9185dcab9f8e
15/06/05 16:52:49 INFO storage.MemoryStore: MemoryStore started with capacity 265.4 MB
15/06/05 16:52:49 INFO spark.HttpFileServer: HTTP File server directory is C:\Users\KAVEEN~1\AppData\Local\Temp\spark-a35e3f17-641c-4ae3-90f2-51eac901b799\httpd-ecea93ad-c285-4c62-9222-01a9d6ff24e4
15/06/05 16:52:49 INFO spark.HttpServer: Starting HTTP Server
15/06/05 16:52:49 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/06/05 16:52:49 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:52138
15/06/05 16:52:49 INFO util.Utils: Successfully started service 'HTTP file server' on port 52138.
15/06/05 16:52:49 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/06/05 16:52:49 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/06/05 16:52:49 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/06/05 16:52:49 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/06/05 16:52:49 INFO ui.SparkUI: Started SparkUI at http://Master:4040
15/06/05 16:52:49 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032

java.lang.NullPointerException
at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:145)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:49)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1027)
at $iwC$$iwC.<init>(<console>:9)
at $iwC.<init>(<console>:18)
at <init>(<console>:20)
at .<init>(<console>:24)
at .<clinit>()
at .<init>(<console>:7)
at .<clinit>()
at $print()
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:130)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:122)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala
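
As an aside on the failure mode reported above: a value containing a space that reaches a command line unquoted is split into two separate arguments. The sketch below is a hypothetical POSIX-shell illustration of that general mechanism only, not Spark's actual launch code path:

```shell
# Hypothetical illustration: an unquoted value containing a space word-splits
# into two arguments, while a quoted value stays intact as one argument.
user="kaveen raajan"

set -- $user              # unquoted: the shell splits on the space
echo "unquoted argc=$#"   # two arguments

set -- "$user"            # quoted: one argument, space preserved
echo "quoted argc=$#"     # one argument
```

The same splitting rules are why spaced Windows usernames tend to break launcher scripts unless every expansion of the username is quoted.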

[jira] [Comment Edited] (SPARK-5493) Support proxy users under kerberos

2015-06-05 Thread Kaveen Raajan (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14574359#comment-14574359
 ] 

Kaveen Raajan edited comment on SPARK-5493 at 6/5/15 12:12 PM:
---

I'm using *Spark 1.3.1* on a *Windows machine* whose username contains a space (current 
username: "kaveen raajan"). I tried to run the following command:

{code}spark-shell --master yarn-client --proxy-user SYSTEM{code}

It runs successfully for usernames without spaces, and the application also runs as the 
SYSTEM user. But when I run it under the spaced username ("kaveen raajan"), it throws 
the following error:

{code}
15/06/05 16:52:48 INFO spark.SecurityManager: Changing view acls to: SYSTEM
15/06/05 16:52:48 INFO spark.SecurityManager: Changing modify acls to: SYSTEM
15/06/05 16:52:48 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(SYSTEM); users with modify permissions: Set(SYSTEM)
15/06/05 16:52:49 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/06/05 16:52:49 INFO Remoting: Starting remoting
15/06/05 16:52:49 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@Master:52137]
15/06/05 16:52:49 INFO util.Utils: Successfully started service 'sparkDriver' on port 52137.
15/06/05 16:52:49 INFO spark.SparkEnv: Registering MapOutputTracker
15/06/05 16:52:49 INFO spark.SparkEnv: Registering BlockManagerMaster
15/06/05 16:52:49 INFO storage.DiskBlockManager: Created local directory at C:\Users\KAVEEN~1\AppData\Local\Temp\spark-d5b43891-274c-457d-aa3a-d79a536fd536\blockmgr-e980101b-4f93-455a-8a05-9185dcab9f8e
15/06/05 16:52:49 INFO storage.MemoryStore: MemoryStore started with capacity 265.4 MB
15/06/05 16:52:49 INFO spark.HttpFileServer: HTTP File server directory is C:\Users\KAVEEN~1\AppData\Local\Temp\spark-a35e3f17-641c-4ae3-90f2-51eac901b799\httpd-ecea93ad-c285-4c62-9222-01a9d6ff24e4
15/06/05 16:52:49 INFO spark.HttpServer: Starting HTTP Server
15/06/05 16:52:49 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/06/05 16:52:49 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:52138
15/06/05 16:52:49 INFO util.Utils: Successfully started service 'HTTP file server' on port 52138.
15/06/05 16:52:49 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/06/05 16:52:49 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/06/05 16:52:49 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/06/05 16:52:49 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/06/05 16:52:49 INFO ui.SparkUI: Started SparkUI at http://Master:4040
15/06/05 16:52:49 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032

java.lang.NullPointerException
at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:145)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:49)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1027)
at $iwC$$iwC.<init>(<console>:9)
at $iwC.<init>(<console>:18)
at <init>(<console>:20)
at .<init>(<console>:24)
at .<clinit>()
at .<init>(<console>:7)
at .<clinit>()
at $print()
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:130)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:122)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:122)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.sca

[jira] [Comment Edited] (SPARK-5493) Support proxy users under kerberos

2015-06-05 Thread Kaveen Raajan (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14574359#comment-14574359
 ] 

Kaveen Raajan edited comment on SPARK-5493 at 6/5/15 11:43 AM:
---


[jira] [Commented] (SPARK-5493) Support proxy users under kerberos

2015-06-05 Thread Kaveen Raajan (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14574359#comment-14574359
 ] 

Kaveen Raajan commented on SPARK-5493:
--

I'm using *Spark 1.3.1* on a *Windows machine* whose username contains a space (current 
username: "kaveen raajan"). I tried to run the following command:

{code}spark-shell --master yarn-client --proxy-user SYSTEM{code}

It runs successfully for usernames without spaces, and the application also runs as the 
SYSTEM user. But when I run it under the spaced username ("kaveen raajan"), it throws 
the following error:

{code}
15/06/05 16:52:48 INFO spark.SecurityManager: Changing view acls to: SYSTEM
15/06/05 16:52:48 INFO spark.SecurityManager: Changing modify acls to: SYSTEM
15/06/05 16:52:48 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(SYSTEM); users with modify permissions: Set(SYSTEM)
15/06/05 16:52:49 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/06/05 16:52:49 INFO Remoting: Starting remoting
15/06/05 16:52:49 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@synclapn3408.CONTOSO:52137]
15/06/05 16:52:49 INFO util.Utils: Successfully started service 'sparkDriver' on port 52137.
15/06/05 16:52:49 INFO spark.SparkEnv: Registering MapOutputTracker
15/06/05 16:52:49 INFO spark.SparkEnv: Registering BlockManagerMaster
15/06/05 16:52:49 INFO storage.DiskBlockManager: Created local directory at C:\Users\KAVEEN~1\AppData\Local\Temp\spark-d5b43891-274c-457d-aa3a-d79a536fd536\blockmgr-e980101b-4f93-455a-8a05-9185dcab9f8e
15/06/05 16:52:49 INFO storage.MemoryStore: MemoryStore started with capacity 265.4 MB
15/06/05 16:52:49 INFO spark.HttpFileServer: HTTP File server directory is C:\Users\KAVEEN~1\AppData\Local\Temp\spark-a35e3f17-641c-4ae3-90f2-51eac901b799\httpd-ecea93ad-c285-4c62-9222-01a9d6ff24e4
15/06/05 16:52:49 INFO spark.HttpServer: Starting HTTP Server
15/06/05 16:52:49 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/06/05 16:52:49 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:52138
15/06/05 16:52:49 INFO util.Utils: Successfully started service 'HTTP file server' on port 52138.
15/06/05 16:52:49 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/06/05 16:52:49 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/06/05 16:52:49 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/06/05 16:52:49 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/06/05 16:52:49 INFO ui.SparkUI: Started SparkUI at http://synclapn3408.CONTOSO:4040
15/06/05 16:52:49 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032

java.lang.NullPointerException
at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:145)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:49)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1027)
at $iwC$$iwC.<init>(<console>:9)
at $iwC.<init>(<console>:18)
at <init>(<console>:20)
at .<init>(<console>:24)
at .<clinit>()
at .<init>(<console>:7)
at .<clinit>()
at $print()
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:130)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:122)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:122)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at 

[jira] [Commented] (SPARK-7700) Spark 1.3.0 on YARN: Application failed 2 times due to AM Container

2015-06-02 Thread Kaveen Raajan (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14569016#comment-14569016
 ] 

Kaveen Raajan commented on SPARK-7700:
--

I applied Patch-1.patch and Spark is working fine.
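
For anyone else trying the attached patch: the general mechanics of applying a unified-diff .patch file to a source tree before rebuilding look roughly like the sketch below. The file and patch contents here are fabricated for illustration only, not the actual Patch-1.patch:

```shell
# Hypothetical sketch of applying a unified-diff patch with `patch -p1`;
# the tree and patch are fabricated to show only the mechanics.
mkdir -p demo/src && cd demo
printf 'original line\n' > src/app.txt
cat > Patch-1.patch <<'EOF'
--- a/src/app.txt
+++ b/src/app.txt
@@ -1 +1 @@
-original line
+patched line
EOF
patch -p1 < Patch-1.patch   # -p1 strips the leading a/ and b/ path components
cat src/app.txt             # the file now contains the patched line
```

Against the real Spark tree one would run the same `patch -p1` (or `git apply`) from the source root and then rerun the Maven build.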

> Spark 1.3.0 on YARN: Application failed 2 times due to AM Container
> ---
>
> Key: SPARK-7700
> URL: https://issues.apache.org/jira/browse/SPARK-7700
> Project: Spark
>  Issue Type: Story
>  Components: Build
>Affects Versions: 1.3.1
> Environment: windows 8 Single language
> Hadoop-2.5.2
> Protocol Buffer-2.5.0
> Scala-2.11
>Reporter: Kaveen Raajan
> Attachments: Patch-1.patch
>
>
> I built Spark for YARN with the following command, and the build succeeded:
> {panel}
> mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive -Phive-0.12.0 
> -Phive-thriftserver -DskipTests clean package
> {panel}
> I set the following property in the spark-env.cmd file:
> {panel}
> SET SPARK_JAR=hdfs://master:9000/user/spark/jar
> {panel}
> *Note:* The Spark jar files were moved to the specified HDFS location, the Spark 
> classpath was added to hadoop-config.cmd, and HADOOP_CONF_DIR was set as an 
> environment variable.
> I tried to execute the following SparkPi example in yarn-cluster mode:
> {panel}
> spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster 
> --num-executors 3 --driver-memory 4g --executor-memory 2g --executor-cores 1 
> --queue default 
> S:\Hadoop\Spark\spark-1.3.1\examples\target\spark-examples_2.10-1.3.1.jar 10
> {panel}
> The job is submitted to the Hadoop cluster, but it always stays in the ACCEPTED 
> state and then fails with the following error:
> {panel}
> 15/05/14 13:00:51 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
> 15/05/14 13:00:51 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
> 15/05/14 13:00:51 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8048 MB per container)
> 15/05/14 13:00:51 INFO yarn.Client: Will allocate AM container, with 4480 MB memory including 384 MB overhead
> 15/05/14 13:00:51 INFO yarn.Client: Setting up container launch context for our AM
> 15/05/14 13:00:51 INFO yarn.Client: Preparing resources for our AM container
> 15/05/14 13:00:52 INFO yarn.Client: Source and destination file systems are the same. Not copying hdfs://master:9000/user/spark/jar
> 15/05/14 13:00:52 INFO yarn.Client: Uploading resource file:/S:/Hadoop/Spark/spark-1.3.1/examples/target/spark-examples_2.10-1.3.1.jar -> hdfs://master:9000/user/HDFS/.sparkStaging/application_1431587916618_0003/spark-examples_2.10-1.3.1.jar
> 15/05/14 13:00:52 INFO yarn.Client: Setting up the launch environment for our AM container
> 15/05/14 13:00:52 INFO spark.SecurityManager: Changing view acls to: HDFS
> 15/05/14 13:00:52 INFO spark.SecurityManager: Changing modify acls to: HDFS
> 15/05/14 13:00:52 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(HDFS); users with modify permissions: Set(HDFS)
> 15/05/14 13:00:52 INFO yarn.Client: Submitting application 3 to ResourceManager
> 15/05/14 13:00:52 INFO impl.YarnClientImpl: Submitted application application_1431587916618_0003
> 15/05/14 13:00:53 INFO yarn.Client: Application report for application_1431587916618_0003 (state: ACCEPTED)
> 15/05/14 13:00:53 INFO yarn.Client:
>  client token: N/A
>  diagnostics: N/A
>  ApplicationMaster host: N/A
>  ApplicationMaster RPC port: -1
>  queue: default
>  start time: 1431588652790
>  final status: UNDEFINED
>  tracking URL: http://master:8088/proxy/application_1431587916618_0003/
>  user: HDFS
> 15/05/14 13:00:54 INFO yarn.Client: Application report for application_1431587916618_0003 (state: ACCEPTED)
> 15/05/14 13:00:55 INFO yarn.Client: Application report for application_1431587916618_0003 (state: ACCEPTED)
> 15/05/14 13:00:56 INFO yarn.Client: Application report for application_1431587916618_0003 (state: ACCEPTED)
> 15/05/14 13:00:57 INFO yarn.Client: Application report for application_1431587916618_0003 (state: ACCEPTED)
> 15/05/14 13:00:58 INFO yarn.Client: Application report for application_1431587916618_0003 (state: ACCEPTED)
> 15/05/14 13:00:59 INFO yarn.Client: Application report for application_1431587916618_0003 (state: FAILED)
> 15/05/14 13:00:59 INFO yarn.Client:
>  client token: N/A
>  diagnostics: Application application_1431587916618_0003 failed 2 times due to AM Container for appattempt_1431587916618_0003_02 exited with exitCode: 1
> For more detailed output, check application tracking page:http://master:8088/proxy/application_143158

[jira] [Updated] (SPARK-7700) Spark 1.3.0 on YARN: Application failed 2 times due to AM Container

2015-06-02 Thread Kaveen Raajan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-7700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kaveen Raajan updated SPARK-7700:
-
Attachment: Patch-1.patch

> Spark 1.3.0 on YARN: Application failed 2 times due to AM Container
> ---
>
> Key: SPARK-7700
> URL: https://issues.apache.org/jira/browse/SPARK-7700
> Project: Spark
>  Issue Type: Story
>  Components: Build
>Affects Versions: 1.3.1
> Environment: windows 8 Single language
> Hadoop-2.5.2
> Protocol Buffer-2.5.0
> Scala-2.11
>Reporter: Kaveen Raajan
> Attachments: Patch-1.patch
>
>
> I built Spark in YARN mode with the following command; the build succeeded.
> {panel}
> mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive -Phive-0.12.0 
> -Phive-thriftserver -DskipTests clean package
> {panel}
> I set the following property in the spark-env.cmd file:
> {panel}
> SET SPARK_JAR=hdfs://master:9000/user/spark/jar
> {panel}
> *Note:* The Spark jar files were moved to the specified HDFS location. The 
> Spark classpath was also added to hadoop-config.cmd, and HADOOP_CONF_DIR is 
> set as an environment variable.
> I tried to execute the following SparkPi example in yarn-cluster mode.
> {panel}
> spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster 
> --num-executors 3 --driver-memory 4g --executor-memory 2g --executor-cores 1 
> --queue default 
> S:\Hadoop\Spark\spark-1.3.1\examples\target\spark-examples_2.10-1.3.1.jar 10
> {panel}
> The job was submitted to the Hadoop cluster, but it always stayed in the 
> ACCEPTED state and then failed with the following error:
> {panel}
> 15/05/14 13:00:51 INFO client.RMProxy: Connecting to ResourceManager at 
> /0.0.0.0:8032
> 15/05/14 13:00:51 INFO yarn.Client: Requesting a new application from cluster 
> with 1 NodeManagers
> 15/05/14 13:00:51 INFO yarn.Client: Verifying our application has not 
> requested more than the maximum memory capability of the cluster (8048 MB per 
> container)
> 15/05/14 13:00:51 INFO yarn.Client: Will allocate AM container, with 4480 MB 
> memory including 384 MB overhead
> 15/05/14 13:00:51 INFO yarn.Client: Setting up container launch context for 
> our AM
> 15/05/14 13:00:51 INFO yarn.Client: Preparing resources for our AM container
> 15/05/14 13:00:52 INFO yarn.Client: Source and destination file systems are 
> the same. Not copying hdfs://master:9000/user/spark/jar
> 15/05/14 13:00:52 INFO yarn.Client: Uploading resource 
> file:/S:/Hadoop/Spark/spark-1.3.1/examples/target/spark-examples_2.10-1.3.1.jar
>  -> 
> hdfs://master:9000/user/HDFS/.sparkStaging/application_1431587916618_0003/spark-examples_2.10-1.3.1.jar
> 15/05/14 13:00:52 INFO yarn.Client: Setting up the launch environment for our 
> AM container
> 15/05/14 13:00:52 INFO spark.SecurityManager: Changing view acls to: HDFS
> 15/05/14 13:00:52 INFO spark.SecurityManager: Changing modify acls to: HDFS
> 15/05/14 13:00:52 INFO spark.SecurityManager: SecurityManager: authentication 
> disabled; ui acls disabled; users with view permissions: Set(HDFS); users 
> with modify permissions: Set(HDFS)
> 15/05/14 13:00:52 INFO yarn.Client: Submitting application 3 to 
> ResourceManager
> 15/05/14 13:00:52 INFO impl.YarnClientImpl: Submitted application 
> application_1431587916618_0003
> 15/05/14 13:00:53 INFO yarn.Client: Application report for 
> application_1431587916618_0003 (state: ACCEPTED)
> 15/05/14 13:00:53 INFO yarn.Client:
>  client token: N/A
>  diagnostics: N/A
>  ApplicationMaster host: N/A
>  ApplicationMaster RPC port: -1
>  queue: default
>  start time: 1431588652790
>  final status: UNDEFINED
>  tracking URL: 
> http://master:8088/proxy/application_1431587916618_0003/
>  user: HDFS
> 15/05/14 13:00:54 INFO yarn.Client: Application report for 
> application_1431587916618_0003 (state: ACCEPTED)
> 15/05/14 13:00:55 INFO yarn.Client: Application report for 
> application_1431587916618_0003 (state: ACCEPTED)
> 15/05/14 13:00:56 INFO yarn.Client: Application report for 
> application_1431587916618_0003 (state: ACCEPTED)
> 15/05/14 13:00:57 INFO yarn.Client: Application report for 
> application_1431587916618_0003 (state: ACCEPTED)
> 15/05/14 13:00:58 INFO yarn.Client: Application report for 
> application_1431587916618_0003 (state: ACCEPTED)
> 15/05/14 13:00:59 INFO yarn.Client: Application report for 
> application_1431587916618_0003 (state: FAILED)
> 15/05/14 13:00:59 INFO yarn.Client:
>  client token: N/A
>  diagnostics: Application application_1431587916618_0003 failed 2 
> times
> due to AM Container for appattempt_1431587916618_0003_02 exited with  
> exitCode: 1
> For more detailed output, check application tracking 
> page: http://master:8088/proxy/application_1431587916618_0003/ Then, click on 
> the links to the logs of each attempt.
> Diagn

[jira] [Updated] (SPARK-5389) spark-shell.cmd does not run from DOS Windows 7

2015-06-02 Thread Kaveen Raajan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kaveen Raajan updated SPARK-5389:
-
Attachment: (was: Patch-1.patch)

> spark-shell.cmd does not run from DOS Windows 7
> ---
>
> Key: SPARK-5389
> URL: https://issues.apache.org/jira/browse/SPARK-5389
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark, Spark Shell, Windows
>Affects Versions: 1.2.0
> Environment: Windows 7
>Reporter: Yana Kadiyska
> Attachments: SparkShell_Win7.JPG, spark_bug.png
>
>
> spark-shell.cmd crashes in a DOS prompt on Windows 7 but works fine under 
> PowerShell. spark-shell.cmd works fine for me in v1.1, so this is new in Spark 1.2.
> Marking as trivial since calling spark-shell2.cmd also works fine
> Attaching a screenshot since the error isn't very useful:
> {code}
> spark-1.2.0-bin-cdh4>bin\spark-shell.cmd
> else was unexpected at this time.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-5389) spark-shell.cmd does not run from DOS Windows 7

2015-06-02 Thread Kaveen Raajan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kaveen Raajan updated SPARK-5389:
-
Attachment: Patch-1.patch

> spark-shell.cmd does not run from DOS Windows 7
> ---
>
> Key: SPARK-5389
> URL: https://issues.apache.org/jira/browse/SPARK-5389
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark, Spark Shell, Windows
>Affects Versions: 1.2.0
> Environment: Windows 7
>Reporter: Yana Kadiyska
> Attachments: Patch-1.patch, SparkShell_Win7.JPG, spark_bug.png
>
>
> spark-shell.cmd crashes in a DOS prompt on Windows 7 but works fine under 
> PowerShell. spark-shell.cmd works fine for me in v1.1, so this is new in Spark 1.2.
> Marking as trivial since calling spark-shell2.cmd also works fine
> Attaching a screenshot since the error isn't very useful:
> {code}
> spark-1.2.0-bin-cdh4>bin\spark-shell.cmd
> else was unexpected at this time.
> {code}






[jira] [Commented] (SPARK-5389) spark-shell.cmd does not run from DOS Windows 7

2015-05-20 Thread Kaveen Raajan (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-5389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553733#comment-14553733
 ] 

Kaveen Raajan commented on SPARK-5389:
--

I had the same problem and found the root cause: JAVA_HOME is not set 
correctly. Please set it correctly in the environment variables, and make sure 
the Java path does not contain any spaces.

OS: Windows 8 and Windows Server 2008
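
A quick programmatic version of that check — a minimal sketch, not part of 
Spark itself (the object and method names are hypothetical):

```scala
// Hypothetical helper: flag a JAVA_HOME that contains spaces, which breaks
// the batch-file quoting used by spark-shell.cmd on Windows.
object JavaHomeCheck {
  def hasSpaces(path: String): Boolean = path != null && path.contains(" ")

  def main(args: Array[String]): Unit = {
    // A JDK under "Program Files" is the classic offender.
    assert(hasSpaces("C:/Program Files/Java/jdk1.7.0"))
    assert(!hasSpaces("C:/Java/jdk1.7.0"))
    println("JAVA_HOME checks passed")
  }
}
```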

> spark-shell.cmd does not run from DOS Windows 7
> ---
>
> Key: SPARK-5389
> URL: https://issues.apache.org/jira/browse/SPARK-5389
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark, Spark Shell, Windows
>Affects Versions: 1.2.0
> Environment: Windows 7
>Reporter: Yana Kadiyska
> Attachments: SparkShell_Win7.JPG, spark_bug.png
>
>
> spark-shell.cmd crashes in a DOS prompt on Windows 7 but works fine under 
> PowerShell. spark-shell.cmd works fine for me in v1.1, so this is new in Spark 1.2.
> Marking as trivial since calling spark-shell2.cmd also works fine
> Attaching a screenshot since the error isn't very useful:
> {code}
> spark-1.2.0-bin-cdh4>bin\spark-shell.cmd
> else was unexpected at this time.
> {code}






[jira] [Comment Edited] (SPARK-7700) Spark 1.3.0 on YARN: Application failed 2 times due to AM Container

2015-05-19 Thread Kaveen Raajan (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550436#comment-14550436
 ] 

Kaveen Raajan edited comment on SPARK-7700 at 5/19/15 2:13 PM:
---

Hi [~srowen],
Thanks for the update.
Now I am able to run Spark jobs successfully in both yarn-client and 
yarn-cluster mode. I changed the following in the Spark code.

Replacing single quotes with double quotes:
*Line Number - 163*
{code:title=YarnSparkHadoopUtil.scala|borderStyle=solid}
  def escapeForShell(arg: String): String = {
if (arg != null) {
  val escaped = new StringBuilder("'")
  for (i <- 0 to arg.length() - 1) {
arg.charAt(i) match {
  case '$' => escaped.append("\\$")
  case '"' => escaped.append("\\\"")
  case '\'' => escaped.append("'\\''")
  case c => escaped.append(c)
}
  }
  escaped.append("'").toString()
} else {
  arg
}
  }
{code}
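
For contrast, a Windows-oriented variant would wrap the argument in double 
quotes, which cmd.exe does recognize — a sketch under the assumption that 
embedded double quotes are the only characters needing escape (the object and 
method names are hypothetical, not Spark code):

```scala
// Hypothetical Windows counterpart to escapeForShell: cmd.exe ignores single
// quotes, so wrap in double quotes and escape embedded double quotes instead.
object WindowsEscape {
  def escapeForCmd(arg: String): String = {
    if (arg != null) {
      val escaped = new StringBuilder("\"")
      for (c <- arg) {
        c match {
          case '"' => escaped.append("\\\"")  // escape embedded double quotes
          case c   => escaped.append(c)
        }
      }
      escaped.append("\"").toString()
    } else {
      arg
    }
  }
}
```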

After making the above changes, we faced this issue:
{color:red}Error: Could not find or load main class 
PWD.Hadoop.logs.userlogs.application_1431950105623_0001.container_1431950105623_0001_01_04{color}

To resolve it, we located where the issue is reproduced in the Spark source and 
removed " -XX:OnOutOfMemoryError='kill %p'" from the commands.

*Line Number - 213*
{code:title=ExecutorRunnable.scala|borderStyle=solid}
val commands = prefixEnv ++ Seq(
  YarnSparkHadoopUtil.expandEnvironment(Environment.JAVA_HOME) + 
"/bin/java",
  "-server",
  // Kill if OOM is raised - leverage yarn's failure handling to cause 
rescheduling.
  // Not killing the task leaves various aspects of the executor and (to 
some extent) the jvm in
  // an inconsistent state.
  // TODO: If the OOM is not recoverable by rescheduling it on different 
node, then do
  // 'something' to fail job ... akin to blacklisting trackers in mapred ?
  " -XX:OnOutOfMemoryError='kill %p'") ++
  javaOpts ++
  Seq("org.apache.spark.executor.CoarseGrainedExecutorBackend",
"--driver-url", masterAddress.toString,
"--executor-id", slaveId.toString,
"--hostname", hostname.toString,
"--cores", executorCores.toString,
"--app-id", appId) ++
  userClassPath ++
  Seq(
"1>", ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stdout",
"2>", ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stderr") 
{code}
Now we are able to run all Spark jobs in both yarn-cluster and yarn-client mode.

A few follow-up questions:
Is there an equivalent patch available for these changes?
Why does Windows not accept single quotes?
What is the reason for the " -XX:OnOutOfMemoryError='kill %p'" line?


was (Author: kaveenbigdata):
Hi [~srowen],
Thanks for the update.
Now I am able to run Spark jobs successfully in both yarn-client and 
yarn-cluster mode. I changed the following in the Spark code.

Replacing single quotes with double quotes:
*Line Number - 163*
{code:title=YarnSparkHadoopUtil.scala|borderStyle=solid}
  def escapeForShell(arg: String): String = {
if (arg != null) {
  val escaped = new StringBuilder("'")
  for (i <- 0 to arg.length() - 1) {
arg.charAt(i) match {
  case '$' => escaped.append("\\$")
  case '"' => escaped.append("\\\"")
  case '\'' => escaped.append("'\\''")
  case c => escaped.append(c)
}
  }
  escaped.append("'").toString()
} else {
  arg
}
  }
{code}

After making the above changes, we faced this issue:
{color:red}Error: Could not find or load main class 
PWD.Syncfusion.BigDataSDK.2.1.0.70.SDK.Hadoop.logs.userlogs.application_1431950105623_0001.container_1431950105623_0001_01_04{color}

To resolve it, we located where the issue is reproduced in the Spark source and 
removed " -XX:OnOutOfMemoryError='kill %p'" from the commands.

*Line Number - 213*
{code:title=ExecutorRunnable.scala|borderStyle=solid}
val commands = prefixEnv ++ Seq(
  YarnSparkHadoopUtil.expandEnvironment(Environment.JAVA_HOME) + 
"/bin/java",
  "-server",
  // Kill if OOM is raised - leverage yarn's failure handling to cause 
rescheduling.
  // Not killing the task leaves various aspects of the executor and (to 
some extent) the jvm in
  // an inconsistent state.
  // TODO: If the OOM is not recoverable by rescheduling it on different 
node, then do
  // 'something' to fail job ... akin to blacklisting trackers in mapred ?
  " -XX:OnOutOfMemoryError='kill %p'") ++
  javaOpts ++
  Seq("org.apache.spark.executor.CoarseGrainedExecutorBackend",
"--driver-url", masterAddress.toString,
"--executor-id", slaveId.toString,
"--hostname", hostname.toString,
"--cores", executorCores.toString,
"--app-id", appId) ++
  userClassPath ++
  Seq(
"1>", ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stdout",
 

[jira] [Commented] (SPARK-7700) Spark 1.3.0 on YARN: Application failed 2 times due to AM Container

2015-05-19 Thread Kaveen Raajan (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550436#comment-14550436
 ] 

Kaveen Raajan commented on SPARK-7700:
--

Hi [~srowen],
Thanks for the update.
Now I am able to run Spark jobs successfully in both yarn-client and 
yarn-cluster mode. I changed the following in the Spark code.

Replacing single quotes with double quotes:
*Line Number - 163*
{code:title=YarnSparkHadoopUtil.scala|borderStyle=solid}
  def escapeForShell(arg: String): String = {
if (arg != null) {
  val escaped = new StringBuilder("'")
  for (i <- 0 to arg.length() - 1) {
arg.charAt(i) match {
  case '$' => escaped.append("\\$")
  case '"' => escaped.append("\\\"")
  case '\'' => escaped.append("'\\''")
  case c => escaped.append(c)
}
  }
  escaped.append("'").toString()
} else {
  arg
}
  }
{code}

After making the above changes, we faced this issue:
{color:red}Error: Could not find or load main class 
PWD.Syncfusion.BigDataSDK.2.1.0.70.SDK.Hadoop.logs.userlogs.application_1431950105623_0001.container_1431950105623_0001_01_04{color}

To resolve it, we located where the issue is reproduced in the Spark source and 
removed " -XX:OnOutOfMemoryError='kill %p'" from the commands.

*Line Number - 213*
{code:title=ExecutorRunnable.scala|borderStyle=solid}
val commands = prefixEnv ++ Seq(
  YarnSparkHadoopUtil.expandEnvironment(Environment.JAVA_HOME) + 
"/bin/java",
  "-server",
  // Kill if OOM is raised - leverage yarn's failure handling to cause 
rescheduling.
  // Not killing the task leaves various aspects of the executor and (to 
some extent) the jvm in
  // an inconsistent state.
  // TODO: If the OOM is not recoverable by rescheduling it on different 
node, then do
  // 'something' to fail job ... akin to blacklisting trackers in mapred ?
  " -XX:OnOutOfMemoryError='kill %p'") ++
  javaOpts ++
  Seq("org.apache.spark.executor.CoarseGrainedExecutorBackend",
"--driver-url", masterAddress.toString,
"--executor-id", slaveId.toString,
"--hostname", hostname.toString,
"--cores", executorCores.toString,
"--app-id", appId) ++
  userClassPath ++
  Seq(
"1>", ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stdout",
"2>", ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stderr") 
{code}
Now we are able to run all Spark jobs in both yarn-cluster and yarn-client mode.

A few follow-up questions:
Is there an equivalent patch available for these changes?
Why does Windows not accept single quotes?
What is the reason for the " -XX:OnOutOfMemoryError='kill %p'" line?
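
On the quoting question: a POSIX shell strips single quotes before the JVM 
sees the argument, while cmd.exe passes them through literally, so the JVM 
receives a malformed option. A minimal sketch of platform-aware quoting for 
this flag (the helper object is hypothetical, not Spark code):

```scala
// Hypothetical sketch: -XX:OnOutOfMemoryError tells the JVM to run a command
// ("kill %p" kills the current process) when an OOM occurs, so YARN sees the
// container die and can reschedule it. The quoting must match the shell that
// launches the container.
object OomFlag {
  def quoted(isWindows: Boolean): String = {
    val cmd = "kill %p"
    // sh strips single quotes; cmd.exe does not, so use double quotes there.
    if (isWindows) "-XX:OnOutOfMemoryError=\"" + cmd + "\""
    else "-XX:OnOutOfMemoryError='" + cmd + "'"
  }
}
```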

> Spark 1.3.0 on YARN: Application failed 2 times due to AM Container
> ---
>
> Key: SPARK-7700
> URL: https://issues.apache.org/jira/browse/SPARK-7700
> Project: Spark
>  Issue Type: Story
>  Components: Build
>Affects Versions: 1.3.1
> Environment: windows 8 Single language
> Hadoop-2.5.2
> Protocol Buffer-2.5.0
> Scala-2.11
>Reporter: Kaveen Raajan
>
> I built Spark in YARN mode with the following command; the build succeeded.
> {panel}
> mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive -Phive-0.12.0 
> -Phive-thriftserver -DskipTests clean package
> {panel}
> I set the following property in the spark-env.cmd file:
> {panel}
> SET SPARK_JAR=hdfs://master:9000/user/spark/jar
> {panel}
> *Note:* The Spark jar files were moved to the specified HDFS location. The 
> Spark classpath was also added to hadoop-config.cmd, and HADOOP_CONF_DIR is 
> set as an environment variable.
> I tried to execute the following SparkPi example in yarn-cluster mode.
> {panel}
> spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster 
> --num-executors 3 --driver-memory 4g --executor-memory 2g --executor-cores 1 
> --queue default 
> S:\Hadoop\Spark\spark-1.3.1\examples\target\spark-examples_2.10-1.3.1.jar 10
> {panel}
> The job was submitted to the Hadoop cluster, but it always stayed in the 
> ACCEPTED state and then failed with the following error:
> {panel}
> 15/05/14 13:00:51 INFO client.RMProxy: Connecting to ResourceManager at 
> /0.0.0.0:8032
> 15/05/14 13:00:51 INFO yarn.Client: Requesting a new application from cluster 
> with 1 NodeManagers
> 15/05/14 13:00:51 INFO yarn.Client: Verifying our application has not 
> requested more than the maximum memory capability of the cluster (8048 MB per 
> container)
> 15/05/14 13:00:51 INFO yarn.Client: Will allocate AM container, with 4480 MB 
> memory including 384 MB overhead
> 15/05/14 13:00:51 INFO yarn.Client: Setting up container launch context for 
> our AM
> 15/05/14 13:00:51 INFO yarn.Client: Preparing resources for our AM container
> 15/05/14 13:00:52 INFO yarn.Client: Source and destination file systems are 
> the same.

[jira] [Commented] (SPARK-7700) Spark 1.3.0 on YARN: Application failed 2 times due to AM Container

2015-05-18 Thread Kaveen Raajan (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14547641#comment-14547641
 ] 

Kaveen Raajan commented on SPARK-7700:
--

Hi [~srowen]

I'm sure there are no spaces in my Hadoop and Spark paths.

Since I am running in a Windows environment, I replaced spark-env.sh with 
spark-env.cmd, which contains 
_SET SPARK_JAR=hdfs://master:9000/user/spark/jar_
*Note:* The Spark jar files were moved to the specified HDFS location. The 
Spark classpath was also added to hadoop-config.cmd, and HADOOP_CONF_DIR is set 
as an environment variable.

I also did not add any JVM args myself. Looking at the launch-container.cmd 
file, these lines are executed:
{panel}
@C:\Hadoop\bin\winutils.exe symlink "__spark__.jar" 
"\tmp\hadoop-HDFS\nm-local-dir\filecache\10\jar"
@C:\Hadoop\bin\winutils.exe symlink "__app__.jar" 
"\tmp\hadoop-HDFS\nm-local-dir\usercache\HDFS\filecache\10\spark-examples-1.3.1-hadoop2.5.2.jar"
@call %JAVA_HOME%/bin/java -server -Xmx4096m -Djava.io.tmpdir=%PWD%/tmp 
'-Dspark.executor.memory=2g' 
'-Dspark.app.name=org.apache.spark.examples.SparkPi' 
'-Dspark.master=yarn-cluster' 
-Dspark.yarn.app.container.log.dir=C:/Hadoop/logs/userlogs/application_1431924261044_0002/container_1431924261044_0002_01_01
 org.apache.spark.deploy.yarn.ApplicationMaster --class 
'org.apache.spark.examples.SparkPi' --jar 
file:/C:/Spark/lib/spark-examples-1.3.1-hadoop2.5.2.jar --arg '10' 
--executor-memory 2048m --executor-cores 1 --num-executors  3 1> 
C:/Hadoop/logs/userlogs/application_1431924261044_0002/container_1431924261044_0002_01_01/stdout
 2> 
C:/Hadoop/logs/userlogs/application_1431924261044_0002/container_1431924261044_0002_01_01/stderr
{panel}

> Spark 1.3.0 on YARN: Application failed 2 times due to AM Container
> ---
>
> Key: SPARK-7700
> URL: https://issues.apache.org/jira/browse/SPARK-7700
> Project: Spark
>  Issue Type: Story
>  Components: Build
>Affects Versions: 1.3.1
> Environment: windows 8 Single language
> Hadoop-2.5.2
> Protocol Buffer-2.5.0
> Scala-2.11
>Reporter: Kaveen Raajan
>
> I built Spark in YARN mode with the following command; the build succeeded.
> {panel}
> mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive -Phive-0.12.0 
> -Phive-thriftserver -DskipTests clean package
> {panel}
> I set the following property in the spark-env.cmd file:
> {panel}
> SET SPARK_JAR=hdfs://master:9000/user/spark/jar
> {panel}
> *Note:* The Spark jar files were moved to the specified HDFS location. The 
> Spark classpath was also added to hadoop-config.cmd, and HADOOP_CONF_DIR is 
> set as an environment variable.
> I tried to execute the following SparkPi example in yarn-cluster mode.
> {panel}
> spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster 
> --num-executors 3 --driver-memory 4g --executor-memory 2g --executor-cores 1 
> --queue default 
> S:\Hadoop\Spark\spark-1.3.1\examples\target\spark-examples_2.10-1.3.1.jar 10
> {panel}
> The job was submitted to the Hadoop cluster, but it always stayed in the 
> ACCEPTED state and then failed with the following error:
> {panel}
> 15/05/14 13:00:51 INFO client.RMProxy: Connecting to ResourceManager at 
> /0.0.0.0:8032
> 15/05/14 13:00:51 INFO yarn.Client: Requesting a new application from cluster 
> with 1 NodeManagers
> 15/05/14 13:00:51 INFO yarn.Client: Verifying our application has not 
> requested more than the maximum memory capability of the cluster (8048 MB per 
> container)
> 15/05/14 13:00:51 INFO yarn.Client: Will allocate AM container, with 4480 MB 
> memory including 384 MB overhead
> 15/05/14 13:00:51 INFO yarn.Client: Setting up container launch context for 
> our AM
> 15/05/14 13:00:51 INFO yarn.Client: Preparing resources for our AM container
> 15/05/14 13:00:52 INFO yarn.Client: Source and destination file systems are 
> the same. Not copying hdfs://master:9000/user/spark/jar
> 15/05/14 13:00:52 INFO yarn.Client: Uploading resource 
> file:/S:/Hadoop/Spark/spark-1.3.1/examples/target/spark-examples_2.10-1.3.1.jar
>  -> 
> hdfs://master:9000/user/HDFS/.sparkStaging/application_1431587916618_0003/spark-examples_2.10-1.3.1.jar
> 15/05/14 13:00:52 INFO yarn.Client: Setting up the launch environment for our 
> AM container
> 15/05/14 13:00:52 INFO spark.SecurityManager: Changing view acls to: HDFS
> 15/05/14 13:00:52 INFO spark.SecurityManager: Changing modify acls to: HDFS
> 15/05/14 13:00:52 INFO spark.SecurityManager: SecurityManager: authentication 
> disabled; ui acls disabled; users with view permissions: Set(HDFS); users 
> with modify permissions: Set(HDFS)
> 15/05/14 13:00:52 INFO yarn.Client: Submitting application 3 to 
> ResourceManager
> 15/05/14 13:00:52 INFO impl.YarnClientImpl: Submitted application 
> application_1431587916618_0003
> 15/05/14 13:00:53 INFO yarn.Client: Application report for

[jira] [Created] (SPARK-7700) Spark 1.3.0 on YARN: Application failed 2 times due to AM Container

2015-05-18 Thread Kaveen Raajan (JIRA)
Kaveen Raajan created SPARK-7700:


 Summary: Spark 1.3.0 on YARN: Application failed 2 times due to AM 
Container
 Key: SPARK-7700
 URL: https://issues.apache.org/jira/browse/SPARK-7700
 Project: Spark
  Issue Type: Story
  Components: Build
Affects Versions: 1.3.1
 Environment: windows 8 Single language
Hadoop-2.5.2
Protocol Buffer-2.5.0
Scala-2.11
Reporter: Kaveen Raajan


I built Spark in YARN mode with the following command; the build succeeded.
{panel}
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive -Phive-0.12.0 
-Phive-thriftserver -DskipTests clean package
{panel}

I set the following property in the spark-env.cmd file:
{panel}
SET SPARK_JAR=hdfs://master:9000/user/spark/jar
{panel}
*Note:* The Spark jar files were moved to the specified HDFS location. The 
Spark classpath was also added to hadoop-config.cmd, and HADOOP_CONF_DIR is set 
as an environment variable.

I tried to execute the following SparkPi example in yarn-cluster mode.
{panel}
spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster 
--num-executors 3 --driver-memory 4g --executor-memory 2g --executor-cores 1 
--queue default 
S:\Hadoop\Spark\spark-1.3.1\examples\target\spark-examples_2.10-1.3.1.jar 10
{panel}

The job was submitted to the Hadoop cluster, but it always stayed in the 
ACCEPTED state and then failed with the following error:

{panel}
15/05/14 13:00:51 INFO client.RMProxy: Connecting to ResourceManager at 
/0.0.0.0:8032
15/05/14 13:00:51 INFO yarn.Client: Requesting a new application from cluster 
with 1 NodeManagers
15/05/14 13:00:51 INFO yarn.Client: Verifying our application has not 
requested more than the maximum memory capability of the cluster (8048 MB per 
container)
15/05/14 13:00:51 INFO yarn.Client: Will allocate AM container, with 4480 MB 
memory including 384 MB overhead
15/05/14 13:00:51 INFO yarn.Client: Setting up container launch context for 
our AM
15/05/14 13:00:51 INFO yarn.Client: Preparing resources for our AM container
15/05/14 13:00:52 INFO yarn.Client: Source and destination file systems are 
the same. Not copying hdfs://master:9000/user/spark/jar
15/05/14 13:00:52 INFO yarn.Client: Uploading resource 
file:/S:/Hadoop/Spark/spark-1.3.1/examples/target/spark-examples_2.10-1.3.1.jar 
-> 
hdfs://master:9000/user/HDFS/.sparkStaging/application_1431587916618_0003/spark-examples_2.10-1.3.1.jar
15/05/14 13:00:52 INFO yarn.Client: Setting up the launch environment for our 
AM container
15/05/14 13:00:52 INFO spark.SecurityManager: Changing view acls to: HDFS
15/05/14 13:00:52 INFO spark.SecurityManager: Changing modify acls to: HDFS
15/05/14 13:00:52 INFO spark.SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users with view permissions: Set(HDFS); users with 
modify permissions: Set(HDFS)
15/05/14 13:00:52 INFO yarn.Client: Submitting application 3 to ResourceManager
15/05/14 13:00:52 INFO impl.YarnClientImpl: Submitted application 
application_1431587916618_0003
15/05/14 13:00:53 INFO yarn.Client: Application report for 
application_1431587916618_0003 (state: ACCEPTED)
15/05/14 13:00:53 INFO yarn.Client:
 client token: N/A
 diagnostics: N/A
 ApplicationMaster host: N/A
 ApplicationMaster RPC port: -1
 queue: default
 start time: 1431588652790
 final status: UNDEFINED
 tracking URL: http://master:8088/proxy/application_1431587916618_0003/
 user: HDFS
15/05/14 13:00:54 INFO yarn.Client: Application report for 
application_1431587916618_0003 (state: ACCEPTED)
15/05/14 13:00:55 INFO yarn.Client: Application report for 
application_1431587916618_0003 (state: ACCEPTED)
15/05/14 13:00:56 INFO yarn.Client: Application report for 
application_1431587916618_0003 (state: ACCEPTED)
15/05/14 13:00:57 INFO yarn.Client: Application report for 
application_1431587916618_0003 (state: ACCEPTED)
15/05/14 13:00:58 INFO yarn.Client: Application report for 
application_1431587916618_0003 (state: ACCEPTED)
15/05/14 13:00:59 INFO yarn.Client: Application report for 
application_1431587916618_0003 (state: FAILED)
15/05/14 13:00:59 INFO yarn.Client:
 client token: N/A
 diagnostics: Application application_1431587916618_0003 failed 2 times
due to AM Container for appattempt_1431587916618_0003_02 exited with  
exitCode: 1
For more detailed output, check application tracking 
page: http://master:8088/proxy/application_1431587916618_0003/ Then, click on 
the links to the logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1431587916618_0003_02_01
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContai