I am running spark-1.6.0-bin-hadoop2.6.

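For reference, a minimal sketch of how the driver DEBUG output below can be
enabled, assuming the stock conf/log4j.properties.template as the starting
point (only the root level is changed from INFO to DEBUG):

  # conf/log4j.properties
  log4j.rootCategory=DEBUG, console
  log4j.appender.console=org.apache.log4j.ConsoleAppender
  log4j.appender.console.target=System.err
  log4j.appender.console.layout=org.apache.log4j.PatternLayout
  log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
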
There is no stack trace because there is no exception; the driver just
seems to be waiting. Here is the driver DEBUG log:

16/01/11 16:31:51 INFO SparkContext: Running Spark version 1.6.0
16/01/11 16:31:51 DEBUG MutableMetricsFactory: field
org.apache.hadoop.metrics2.lib.MutableRate
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess
with annotation
@org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=,
always=false, type=DEFAULT, value=[Rate of successful kerberos logins and
latency (milliseconds)], valueName=Time)
16/01/11 16:31:51 DEBUG MutableMetricsFactory: field
org.apache.hadoop.metrics2.lib.MutableRate
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure
with annotation
@org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=,
always=false, type=DEFAULT, value=[Rate of failed kerberos logins and
latency (milliseconds)], valueName=Time)
16/01/11 16:31:51 DEBUG MetricsSystemImpl: UgiMetrics, User and group
related metrics
16/01/11 16:31:54 DEBUG KerberosName: Kerberos krb5 configuration not
found, setting default realm to empty
16/01/11 16:31:54 DEBUG Groups:  Creating new Groups object
16/01/11 16:31:54 DEBUG NativeCodeLoader: Trying to load the custom-built
native-hadoop library...
16/01/11 16:31:54 DEBUG NativeCodeLoader: Failed to load native-hadoop with
error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
16/01/11 16:31:54 DEBUG NativeCodeLoader: java.library.path=C:\Program
Files\Java\jdk1.8.0_40\jre\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\Program
Files\NetBeans
8.0.2\java\maven\bin;C:\ProgramData\Oracle\Java\javapath;C:\Program Files
(x86)\Intel\iCLS Client\;C:\Program Files\Intel\iCLS
Client\;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program
Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files
(x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program
Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files
(x86)\Intel\Intel(R) Management Engine Components\IPT;C:\Program
Files\Lenovo\Fingerprint Manager Pro\;C:\Program Files (x86)\Common
Files\Lenovo;C:\SWTOOLS\ReadyApps;C:\Program Files (x86)\Windows
Kits\8.1\Windows Performance Toolkit\;C:\Program Files\Microsoft SQL
Server\110\Tools\Binn\;C:\Program Files (x86)\Microsoft
SDKs\TypeScript\1.0\;C:\Program Files\Microsoft SQL
Server\120\Tools\Binn\;C:\Program Files\Microsoft\Web Platform
Installer\;C:\Program Files (x86)\Common
Files\lenovo\easyplussdk\bin;C:\Program Files (x86)\Lenovo\Access
Connections\;C:\Program Files\Intel\WiFi\bin\;C:\Program Files\Common
Files\Intel\WirelessCommon\;C:\Program Files
(x86)\QuickTime\QTSystem\;C:\Program Files (x86)\Skype\Phone\;C:\Program
Files\Amazon\AWSCLI\;C:\Users\Andrew\Projects\magellan\opt\RDKit_2015_03_1.win64.java;C:\Program
Files\OpenBabel-2.3.90;C:\Program Files\Intel\WiFi\bin\;C:\Program
Files\Common Files\Intel\WirelessCommon\;.
16/01/11 16:31:54 WARN NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
16/01/11 16:31:54 DEBUG JniBasedUnixGroupsMappingWithFallback: Falling back
to shell based
16/01/11 16:31:54 DEBUG JniBasedUnixGroupsMappingWithFallback: Group
mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
16/01/11 16:31:54 DEBUG Groups: Group mapping
impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback;
cacheTimeout=300000
16/01/11 16:31:54 DEBUG UserGroupInformation: hadoop login
16/01/11 16:31:54 DEBUG UserGroupInformation: hadoop login commit
16/01/11 16:31:54 DEBUG UserGroupInformation: using local
user:NTUserPrincipal: Andrew
16/01/11 16:31:54 DEBUG UserGroupInformation: UGI loginUser:Andrew
(auth:SIMPLE)
16/01/11 16:31:54 INFO SecurityManager: Changing view acls to: Andrew
16/01/11 16:31:54 INFO SecurityManager: Changing modify acls to: Andrew
16/01/11 16:31:54 INFO SecurityManager: SecurityManager: authentication
disabled; ui acls disabled; users with view permissions: Set(Andrew); users
with modify permissions: Set(Andrew)
16/01/11 16:31:54 DEBUG SSLOptions: No SSL protocol specified
16/01/11 16:31:54 DEBUG SSLOptions: No SSL protocol specified
16/01/11 16:31:54 DEBUG SSLOptions: No SSL protocol specified
16/01/11 16:31:54 DEBUG SecurityManager: SSLConfiguration for file server:
SSLOptions{enabled=false, keyStore=None, keyStorePassword=None,
trustStore=None, trustStorePassword=None, protocol=None,
enabledAlgorithms=Set()}
16/01/11 16:31:54 DEBUG SecurityManager: SSLConfiguration for Akka:
SSLOptions{enabled=false, keyStore=None, keyStorePassword=None,
trustStore=None, trustStorePassword=None, protocol=None,
enabledAlgorithms=Set()}
16/01/11 16:31:54 DEBUG InternalLoggerFactory: Using SLF4J as the default
logging framework
16/01/11 16:31:54 DEBUG PlatformDependent0: java.nio.Buffer.address:
available
16/01/11 16:31:54 DEBUG PlatformDependent0: sun.misc.Unsafe.theUnsafe:
available
16/01/11 16:31:54 DEBUG PlatformDependent0: sun.misc.Unsafe.copyMemory:
available
16/01/11 16:31:54 DEBUG PlatformDependent0: java.nio.Bits.unaligned: true
16/01/11 16:31:54 DEBUG PlatformDependent: Platform: Windows
16/01/11 16:31:54 DEBUG PlatformDependent: Java version: 8
16/01/11 16:31:54 DEBUG PlatformDependent: -Dio.netty.noUnsafe: false
16/01/11 16:31:54 DEBUG PlatformDependent: sun.misc.Unsafe: available
16/01/11 16:31:54 DEBUG PlatformDependent: -Dio.netty.noJavassist: false
16/01/11 16:31:54 DEBUG PlatformDependent: Javassist: unavailable
16/01/11 16:31:54 DEBUG PlatformDependent: You don't have Javassist in your
class path or you don't have enough permission to load dynamically
generated classes.  Please check the configuration for better performance.
16/01/11 16:31:54 DEBUG PlatformDependent: -Dio.netty.tmpdir:
C:\Users\Andrew\AppData\Local\Temp (java.io.tmpdir)
16/01/11 16:31:54 DEBUG PlatformDependent: -Dio.netty.bitMode: 64
(sun.arch.data.model)
16/01/11 16:31:54 DEBUG PlatformDependent: -Dio.netty.noPreferDirect: false
16/01/11 16:31:55 DEBUG MultithreadEventLoopGroup:
-Dio.netty.eventLoopThreads: 8
16/01/11 16:31:55 DEBUG NioEventLoop: -Dio.netty.noKeySetOptimization: false
16/01/11 16:31:55 DEBUG NioEventLoop:
-Dio.netty.selectorAutoRebuildThreshold: 512
16/01/11 16:31:55 DEBUG PooledByteBufAllocator:
-Dio.netty.allocator.numHeapArenas: 8
16/01/11 16:31:55 DEBUG PooledByteBufAllocator:
-Dio.netty.allocator.numDirectArenas: 8
16/01/11 16:31:55 DEBUG PooledByteBufAllocator:
-Dio.netty.allocator.pageSize: 8192
16/01/11 16:31:55 DEBUG PooledByteBufAllocator:
-Dio.netty.allocator.maxOrder: 11
16/01/11 16:31:55 DEBUG PooledByteBufAllocator:
-Dio.netty.allocator.chunkSize: 16777216
16/01/11 16:31:55 DEBUG PooledByteBufAllocator:
-Dio.netty.allocator.tinyCacheSize: 512
16/01/11 16:31:55 DEBUG PooledByteBufAllocator:
-Dio.netty.allocator.smallCacheSize: 256
16/01/11 16:31:55 DEBUG PooledByteBufAllocator:
-Dio.netty.allocator.normalCacheSize: 64
16/01/11 16:31:55 DEBUG PooledByteBufAllocator:
-Dio.netty.allocator.maxCachedBufferCapacity: 32768
16/01/11 16:31:55 DEBUG PooledByteBufAllocator:
-Dio.netty.allocator.cacheTrimInterval: 8192
16/01/11 16:31:55 DEBUG ThreadLocalRandom:
-Dio.netty.initialSeedUniquifier: 0xbdcb4211c6cd3298 (took 1 ms)
16/01/11 16:31:55 DEBUG ByteBufUtil: -Dio.netty.allocator.type: unpooled
16/01/11 16:31:55 DEBUG ByteBufUtil:
-Dio.netty.threadLocalDirectBufferSize: 65536
16/01/11 16:31:55 DEBUG NetUtil: Loopback interface: lo (Software Loopback
Interface 1, 127.0.0.1)
16/01/11 16:31:55 DEBUG NetUtil: \proc\sys\net\core\somaxconn: 200
(non-existent)
16/01/11 16:31:55 DEBUG TransportServer: Shuffle server started on port
:64331
16/01/11 16:31:55 INFO Utils: Successfully started service 'sparkDriver' on
port 64331.
16/01/11 16:31:55 DEBUG AkkaUtils: In createActorSystem, requireCookie is:
off
16/01/11 16:31:55 INFO Slf4jLogger: Slf4jLogger started
16/01/11 16:31:55 INFO Remoting: Starting remoting
16/01/11 16:31:56 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkDriverActorSystem@192.168.1.139:64349]
16/01/11 16:31:56 INFO Utils: Successfully started service
'sparkDriverActorSystem' on port 64349.
16/01/11 16:31:56 DEBUG SparkEnv: Using serializer: class
org.apache.spark.serializer.JavaSerializer
16/01/11 16:31:56 INFO SparkEnv: Registering MapOutputTracker
16/01/11 16:31:56 INFO SparkEnv: Registering BlockManagerMaster
16/01/11 16:31:56 INFO DiskBlockManager: Created local directory at
C:\Users\Andrew\AppData\Local\Temp\blockmgr-dd965c69-6e07-48e3-b4fa-19e8bf94176d
16/01/11 16:31:56 INFO MemoryStore: MemoryStore started with capacity 405.8
MB
16/01/11 16:31:56 INFO SparkEnv: Registering OutputCommitCoordinator
16/01/11 16:31:56 INFO Utils: Successfully started service 'SparkUI' on
port 4040.
16/01/11 16:31:56 INFO SparkUI: Started SparkUI at http://192.168.1.139:4040
16/01/11 16:31:56 INFO HttpFileServer: HTTP File server directory is
C:\Users\Andrew\AppData\Local\Temp\spark-59fbdec0-d841-4058-a195-a788bd08bf4a\httpd-07a33838-416e-48fa-b2ed-78efa0dcfab0
16/01/11 16:31:56 INFO HttpServer: Starting HTTP Server
16/01/11 16:31:56 DEBUG HttpServer: HttpServer is not using security
16/01/11 16:31:56 INFO Utils: Successfully started service 'HTTP file
server' on port 64352.
16/01/11 16:31:56 DEBUG HttpFileServer: HTTP file server started at:
http://192.168.1.139:64352
16/01/11 16:31:56 INFO SparkContext: Added JAR
target/magellan-spark-1.0-SNAPSHOT.jar at
http://192.168.1.139:64352/jars/magellan-spark-1.0-SNAPSHOT.jar with
timestamp 1452547916755
16/01/11 16:31:56 INFO AppClient$ClientEndpoint: Connecting to master
spark://ec2-52-90-108-213.compute-1.amazonaws.com:11407...
16/01/11 16:31:57 DEBUG TransportClientFactory: Creating new connection to
ec2-52-90-108-213.compute-1.amazonaws.com/52.90.108.213:11407
16/01/11 16:31:57 DEBUG ResourceLeakDetector:
-Dio.netty.leakDetectionLevel: simple
16/01/11 16:31:57 DEBUG TransportClientFactory: Connection to
ec2-52-90-108-213.compute-1.amazonaws.com/52.90.108.213:11407 successful,
running bootstraps...
16/01/11 16:31:57 DEBUG TransportClientFactory: Successfully created
connection to ec2-52-90-108-213.compute-1.amazonaws.com/52.90.108.213:11407
after 85 ms (0 ms spent in bootstraps)
16/01/11 16:31:57 DEBUG Recycler: -Dio.netty.recycler.maxCapacity.default:
262144
16/01/11 16:31:57 INFO SparkDeploySchedulerBackend: Connected to Spark
cluster with app ID app-20160111213238-0016
16/01/11 16:31:57 INFO AppClient$ClientEndpoint: Executor added:
app-20160111213238-0016/0 on worker-20160111165011-172.31.9.200-52446 (
172.31.9.200:52446) with 2 cores
16/01/11 16:31:57 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20160111213238-0016/0 on hostPort 172.31.9.200:52446 with 2 cores,
1024.0 MB RAM
16/01/11 16:31:57 INFO AppClient$ClientEndpoint: Executor updated:
app-20160111213238-0016/0 is now RUNNING
16/01/11 16:31:57 DEBUG TransportServer: Shuffle server started on port
:64370
16/01/11 16:31:57 INFO Utils: Successfully started service
'org.apache.spark.network.netty.NettyBlockTransferService' on port 64370.
16/01/11 16:31:57 INFO NettyBlockTransferService: Server created on 64370
16/01/11 16:31:57 INFO BlockManagerMaster: Trying to register BlockManager
16/01/11 16:31:57 INFO BlockManagerMasterEndpoint: Registering block
manager 192.168.1.139:64370 with 405.8 MB RAM, BlockManagerId(driver,
192.168.1.139, 64370)
16/01/11 16:31:57 INFO BlockManagerMaster: Registered BlockManager
16/01/11 16:31:57 INFO SparkDeploySchedulerBackend: SchedulerBackend is
ready for scheduling beginning after reached minRegisteredResourcesRatio:
0.0
16/01/11 16:31:58 DEBUG ClosureCleaner: +++ Cleaning closure <function2>
(org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction2$1) +++
16/01/11 16:31:58 DEBUG ClosureCleaner:  + declared fields: 2
16/01/11 16:31:58 DEBUG ClosureCleaner:      public static final long
org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction2$1.serialVersionUID
16/01/11 16:31:58 DEBUG ClosureCleaner:      private final
org.apache.spark.api.java.function.Function2
org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction2$1.fun$2
16/01/11 16:31:58 DEBUG ClosureCleaner:  + declared methods: 1
16/01/11 16:31:58 DEBUG ClosureCleaner:      public final java.lang.Object
org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction2$1.apply(java.lang.Object,java.lang.Object)
16/01/11 16:31:58 DEBUG ClosureCleaner:  + inner classes: 0
16/01/11 16:31:58 DEBUG ClosureCleaner:  + outer classes: 0
16/01/11 16:31:58 DEBUG ClosureCleaner:  + outer objects: 0
16/01/11 16:31:58 DEBUG ClosureCleaner:  + populating accessed fields
because this is the starting closure
16/01/11 16:31:58 DEBUG ClosureCleaner:  + fields accessed by starting
closure: 0
16/01/11 16:31:58 DEBUG ClosureCleaner:  + there are no enclosing objects!
16/01/11 16:31:58 DEBUG ClosureCleaner:  +++ closure <function2>
(org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction2$1) is now
cleaned +++
16/01/11 16:31:58 DEBUG ClosureCleaner: +++ Cleaning closure <function2>
(org.apache.spark.SparkContext$$anonfun$36) +++
16/01/11 16:31:58 DEBUG ClosureCleaner:  + declared fields: 2
16/01/11 16:31:58 DEBUG ClosureCleaner:      public static final long
org.apache.spark.SparkContext$$anonfun$36.serialVersionUID
16/01/11 16:31:58 DEBUG ClosureCleaner:      private final scala.Function1
org.apache.spark.SparkContext$$anonfun$36.processPartition$1
16/01/11 16:31:58 DEBUG ClosureCleaner:  + declared methods: 2
16/01/11 16:31:58 DEBUG ClosureCleaner:      public final java.lang.Object
org.apache.spark.SparkContext$$anonfun$36.apply(java.lang.Object,java.lang.Object)
16/01/11 16:31:58 DEBUG ClosureCleaner:      public final java.lang.Object
org.apache.spark.SparkContext$$anonfun$36.apply(org.apache.spark.TaskContext,scala.collection.Iterator)
16/01/11 16:31:58 DEBUG ClosureCleaner:  + inner classes: 0
16/01/11 16:31:58 DEBUG ClosureCleaner:  + outer classes: 0
16/01/11 16:31:58 DEBUG ClosureCleaner:  + outer objects: 0
16/01/11 16:31:58 DEBUG ClosureCleaner:  + populating accessed fields
because this is the starting closure
16/01/11 16:31:58 DEBUG ClosureCleaner:  + fields accessed by starting
closure: 0
16/01/11 16:31:58 DEBUG ClosureCleaner:  + there are no enclosing objects!
16/01/11 16:31:58 DEBUG ClosureCleaner:  +++ closure <function2>
(org.apache.spark.SparkContext$$anonfun$36) is now cleaned +++
16/01/11 16:31:58 INFO SparkContext: Starting job: reduce at
SimpleSpark.java:19
16/01/11 16:31:58 INFO DAGScheduler: Got job 0 (reduce at
SimpleSpark.java:19) with 2 output partitions
16/01/11 16:31:58 INFO DAGScheduler: Final stage: ResultStage 0 (reduce at
SimpleSpark.java:19)
16/01/11 16:31:58 INFO DAGScheduler: Parents of final stage: List()
16/01/11 16:31:58 INFO DAGScheduler: Missing parents: List()
16/01/11 16:31:58 DEBUG DAGScheduler: submitStage(ResultStage 0)
16/01/11 16:31:58 DEBUG DAGScheduler: missing: List()
16/01/11 16:31:58 INFO DAGScheduler: Submitting ResultStage 0
(ParallelCollectionRDD[0] at parallelize at SimpleSpark.java:18), which has
no missing parents
16/01/11 16:31:58 DEBUG DAGScheduler: submitMissingTasks(ResultStage 0)
16/01/11 16:31:58 INFO MemoryStore: Block broadcast_0 stored as values in
memory (estimated size 1536.0 B, free 1536.0 B)
16/01/11 16:31:58 DEBUG BlockManager: Put block broadcast_0 locally took
 109 ms
16/01/11 16:31:58 DEBUG BlockManager: Putting block broadcast_0 without
replication took  109 ms
16/01/11 16:31:58 INFO MemoryStore: Block broadcast_0_piece0 stored as
bytes in memory (estimated size 1036.0 B, free 2.5 KB)
16/01/11 16:31:58 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory
on 192.168.1.139:64370 (size: 1036.0 B, free: 405.7 MB)
16/01/11 16:31:58 DEBUG BlockManagerMaster: Updated info of block
broadcast_0_piece0
16/01/11 16:31:58 DEBUG BlockManager: Told master about block
broadcast_0_piece0
16/01/11 16:31:58 DEBUG BlockManager: Put block broadcast_0_piece0 locally
took  0 ms
16/01/11 16:31:58 DEBUG BlockManager: Putting block broadcast_0_piece0
without replication took  0 ms
16/01/11 16:31:58 INFO SparkContext: Created broadcast 0 from broadcast at
DAGScheduler.scala:1006
16/01/11 16:31:58 INFO DAGScheduler: Submitting 2 missing tasks from
ResultStage 0 (ParallelCollectionRDD[0] at parallelize at
SimpleSpark.java:18)
16/01/11 16:31:58 DEBUG DAGScheduler: New pending partitions: Set(0, 1)
16/01/11 16:31:58 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
16/01/11 16:31:58 DEBUG TaskSetManager: Epoch for TaskSet 0.0: 0
16/01/11 16:31:58 DEBUG TaskSetManager: Valid locality levels for TaskSet
0.0: NO_PREF, ANY
16/01/11 16:31:58 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:31:58 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:31:59 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:00 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:01 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:02 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:03 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:04 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:05 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:06 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:07 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:08 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:09 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:10 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:11 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:12 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:13 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient resources
16/01/11 16:32:13 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:14 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:15 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:16 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:17 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:18 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:19 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:20 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:21 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:22 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:23 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:24 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:25 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:26 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:27 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:28 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient resources
16/01/11 16:32:28 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:29 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0
16/01/11 16:32:30 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0,
runningTasks: 0


On Mon, Jan 11, 2016 at 4:12 PM, Ted Yu <yuzhih...@gmail.com> wrote:

> Which release of Spark are you using ?
>
> Can you pastebin stack trace of executor(s) so that we can have some more
> clue ?
>
> Thanks
>
> On Mon, Jan 11, 2016 at 1:10 PM, Andrew Wooster <
> andrew.w.woos...@gmail.com> wrote:
>
>> I have a very simple program that runs fine on my Linux server, which runs
>> the Spark master and worker in standalone mode.
>>
>> import java.util.Arrays;
>> import java.util.List;
>>
>> import org.apache.spark.SparkConf;
>> import org.apache.spark.api.java.JavaRDD;
>> import org.apache.spark.api.java.JavaSparkContext;
>> import org.apache.spark.api.java.function.Function2;
>>
>> public class SimpleSpark {
>>     public int sum() {
>>         SparkConf conf = new SparkConf()
>>                 .setAppName("Magellan")
>>                 .setMaster("spark://ec2-nnn-nnn-nnn-nnn.compute-1.amazonaws.com:11407")
>>                 // ship the application jar so executors can load SumFunc
>>                 .setJars(new String[] {"target/magellan-spark-1.0-SNAPSHOT.jar"});
>>         JavaSparkContext sc = new JavaSparkContext(conf);
>>
>>         List<Integer> data = Arrays.asList(1, 2, 3, 4, 5);
>>         JavaRDD<Integer> distData = sc.parallelize(data);
>>         int total = distData.reduce(new SumFunc());
>>         return total;
>>     }
>>
>>     public static class SumFunc implements Function2<Integer, Integer, Integer> {
>>         public Integer call(Integer a, Integer b) {
>>             return a + b;
>>         }
>>     }
>> }
>>
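>> For context, a rough sketch of how sum() is invoked on the driver side; the
>> Main class below is hypothetical, not the actual entry point:
>>
>>     public class Main {
>>         public static void main(String[] args) {
>>             // builds the SparkConf above, runs the reduce, prints the result
>>             int total = new SimpleSpark().sum();
>>             System.out.println("sum = " + total);
>>         }
>>     }
>>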
>> However, when I run the same driver from a Windows machine it outputs the
>> following message and never completes:
>>   16/01/11 20:51:11 WARN TaskSchedulerImpl: Initial job has not accepted
>> any resources; check your cluster UI to ensure that workers are registered
>> and have sufficient resources
>>
>> I have checked the cluster UI and the job is marked as RUNNING (so it
>> does not appear to be waiting on a worker). I do not see anything out of
>> the ordinary in the master and worker logs.
>>
>> How do I debug a problem like this?
>> -Andrew
>>
>
>
