Sam Stoelinga created SPARK-12254:
-------------------------------------

             Summary: Spark with Tachyon and Swift as underFS: executor gets 
killed for unknown reason
                 Key: SPARK-12254
                 URL: https://issues.apache.org/jira/browse/SPARK-12254
             Project: Spark
          Issue Type: Bug
         Environment: Spark 1.6.0-SNAPSHOT
            Reporter: Sam Stoelinga


My Python executors get killed for an unknown reason. I'm running the following 
Spark application: 
https://github.com/samos123/computer-vision-cloud-platform/blob/master/cloudvision_spark/feature_extraction.py
Its input is a SequenceFile of (String, Bytes) records, where the String is the 
filename and the Bytes are the raw image bytes.
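For reference, the job's shape is roughly the sketch below: read (filename, image bytes) pairs, map each to (filename, features), and save the result. The function name `extract_features` and the sample records are illustrative stand-ins, not taken from the linked application; the real PySpark calls are shown in comments.

```python
# Hedged sketch of the job structure, assuming a SequenceFile of
# (filename, raw image bytes) records as described above.

def extract_features(record):
    """Map one (filename, image_bytes) record to (filename, features).
    Placeholder "feature": just the byte length of the image."""
    filename, image_bytes = record
    return filename, len(image_bytes)

# In PySpark this is approximately:
#   rdd = sc.sequenceFile("tachyon://mesos-master-1:19998/caltech-256.hseq")
#   features = rdd.map(extract_features)
#   features.saveAsPickleFile("tachyon://mesos-master-1:19998/caltech-256-surf.pickle")
# Simulated locally with a plain list standing in for the RDD:
records = [("img001.jpg", b"\x89PNG\x0d\x0a"), ("img002.jpg", b"\xff\xd8\xff")]
features = [extract_features(r) for r in records]
print(features)  # [('img001.jpg', 6), ('img002.jpg', 3)]
```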

Spark executor log:
I1210 03:21:34.962113 26622 fetcher.cpp:409] Fetcher Info: 
{"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/20151107-090736-913287360-5050-8036-S0\/root","items":[{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"file:\/\/\/srv\/spark-1.6.0-SNAPSHOT-bin-spark-master-tachyon-0.8.0.tgz"}}],"sandbox_directory":"\/var\/mesos\/slaves\/20151107-090736-913287360-5050-8036-S0\/frameworks\/20151201-132431-913287360-5050-28876-0003\/executors\/20151107-090736-913287360-5050-8036-S0\/runs\/c6d0acca-7ab3-4d65-bf12-85ce68f14750","user":"root"}
I1210 03:21:34.964156 26622 fetcher.cpp:364] Fetching URI 
'file:///srv/spark-1.6.0-SNAPSHOT-bin-spark-master-tachyon-0.8.0.tgz'
I1210 03:21:34.964177 26622 fetcher.cpp:238] Fetching directly into the sandbox 
directory
I1210 03:21:34.964210 26622 fetcher.cpp:176] Fetching URI 
'file:///srv/spark-1.6.0-SNAPSHOT-bin-spark-master-tachyon-0.8.0.tgz'
I1210 03:21:34.964236 26622 fetcher.cpp:159] Copying resource with command:cp 
'/srv/spark-1.6.0-SNAPSHOT-bin-spark-master-tachyon-0.8.0.tgz' 
'/var/mesos/slaves/20151107-090736-913287360-5050-8036-S0/frameworks/20151201-132431-913287360-5050-28876-0003/executors/20151107-090736-913287360-5050-8036-S0/runs/c6d0acca-7ab3-4d65-bf12-85ce68f14750/spark-1.6.0-SNAPSHOT-bin-spark-master-tachyon-0.8.0.tgz'
I1210 03:22:13.515492 26622 fetcher.cpp:76] Extracting with command: tar -C 
'/var/mesos/slaves/20151107-090736-913287360-5050-8036-S0/frameworks/20151201-132431-913287360-5050-28876-0003/executors/20151107-090736-913287360-5050-8036-S0/runs/c6d0acca-7ab3-4d65-bf12-85ce68f14750'
 -xf 
'/var/mesos/slaves/20151107-090736-913287360-5050-8036-S0/frameworks/20151201-132431-913287360-5050-28876-0003/executors/20151107-090736-913287360-5050-8036-S0/runs/c6d0acca-7ab3-4d65-bf12-85ce68f14750/spark-1.6.0-SNAPSHOT-bin-spark-master-tachyon-0.8.0.tgz'
I1210 03:23:22.353385 26622 fetcher.cpp:84] Extracted 
'/var/mesos/slaves/20151107-090736-913287360-5050-8036-S0/frameworks/20151201-132431-913287360-5050-28876-0003/executors/20151107-090736-913287360-5050-8036-S0/runs/c6d0acca-7ab3-4d65-bf12-85ce68f14750/spark-1.6.0-SNAPSHOT-bin-spark-master-tachyon-0.8.0.tgz'
 into 
'/var/mesos/slaves/20151107-090736-913287360-5050-8036-S0/frameworks/20151201-132431-913287360-5050-28876-0003/executors/20151107-090736-913287360-5050-8036-S0/runs/c6d0acca-7ab3-4d65-bf12-85ce68f14750'
W1210 03:23:22.353529 26622 fetcher.cpp:260] Copying instead of extracting 
resource from URI with 'extract' flag, because it does not seem to be an 
archive: file:///srv/spark-1.6.0-SNAPSHOT-bin-spark-master-tachyon-0.8.0.tgz
I1210 03:23:22.353559 26622 fetcher.cpp:441] Fetched 
'file:///srv/spark-1.6.0-SNAPSHOT-bin-spark-master-tachyon-0.8.0.tgz' to 
'/var/mesos/slaves/20151107-090736-913287360-5050-8036-S0/frameworks/20151201-132431-913287360-5050-28876-0003/executors/20151107-090736-913287360-5050-8036-S0/runs/c6d0acca-7ab3-4d65-bf12-85ce68f14750/spark-1.6.0-SNAPSHOT-bin-spark-master-tachyon-0.8.0.tgz'
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/12/10 03:23:24 INFO MesosExecutorBackend: Registered signal handlers for 
[TERM, HUP, INT]
I1210 03:23:25.750306 26668 exec.cpp:132] Version: 0.23.0
I1210 03:23:25.882653 26682 exec.cpp:206] Executor registered on slave 
20151107-090736-913287360-5050-8036-S0
15/12/10 03:23:25 INFO MesosExecutorBackend: Registered with Mesos as executor 
ID 20151107-090736-913287360-5050-8036-S0 with 1 cpus
15/12/10 03:23:26 INFO SecurityManager: Changing view acls to: root
15/12/10 03:23:26 INFO SecurityManager: Changing modify acls to: root
15/12/10 03:23:26 INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users with view permissions: Set(root); users with 
modify permissions: Set(root)
15/12/10 03:23:28 INFO Slf4jLogger: Slf4jLogger started
15/12/10 03:23:28 INFO Remoting: Starting remoting
15/12/10 03:23:28 INFO Remoting: Remoting started; listening on addresses 
:[akka.tcp://sparkExecutor@mesos-slave-1:34263]
15/12/10 03:23:28 INFO Utils: Successfully started service 'sparkExecutor' on 
port 34263.
15/12/10 03:23:32 INFO DiskBlockManager: Created local directory at 
/var/mesos/slaves/20151107-090736-913287360-5050-8036-S0/frameworks/20151201-132431-913287360-5050-28876-0003/executors/20151107-090736-913287360-5050-8036-S0/runs/c6d0acca-7ab3-4d65-bf12-85ce68f14750/blockmgr-f72edf85-0bb2-4125-a70c-5a620a566c08
15/12/10 03:23:32 INFO MemoryStore: MemoryStore started with capacity 1472.3 MB
15/12/10 03:23:34 WARN NativeCodeLoader: Unable to load native-hadoop library 
for your platform... using builtin-java classes where applicable
15/12/10 03:23:37 INFO Executor: Starting executor ID 
20151107-090736-913287360-5050-8036-S0 on host mesos-slave-1
15/12/10 03:23:37 INFO Utils: Successfully started service 
'org.apache.spark.network.netty.NettyBlockTransferService' on port 60277.
15/12/10 03:23:37 INFO NettyBlockTransferService: Server created on 60277
15/12/10 03:23:37 INFO BlockManagerMaster: Trying to register BlockManager
15/12/10 03:23:37 INFO BlockManagerMaster: Registered BlockManager
15/12/10 03:23:37 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
15/12/10 03:23:38 INFO Executor: Fetching 
http://192.168.111.54:48803/files/utils.py with timestamp 1449717675253
15/12/10 03:23:38 INFO Utils: Fetching 
http://192.168.111.54:48803/files/utils.py to 
/var/mesos/slaves/20151107-090736-913287360-5050-8036-S0/frameworks/20151201-132431-913287360-5050-28876-0003/executors/20151107-090736-913287360-5050-8036-S0/runs/c6d0acca-7ab3-4d65-bf12-85ce68f14750/spark-9ba2e063-ea69-4734-8e5f-0b0e5a557f2a/fetchFileTemp8543312081824126029.tmp
15/12/10 03:23:38 INFO Utils: Copying 
/var/mesos/slaves/20151107-090736-913287360-5050-8036-S0/frameworks/20151201-132431-913287360-5050-28876-0003/executors/20151107-090736-913287360-5050-8036-S0/runs/c6d0acca-7ab3-4d65-bf12-85ce68f14750/spark-9ba2e063-ea69-4734-8e5f-0b0e5a557f2a/-3259379171449717675253_cache
 to 
/var/mesos/slaves/20151107-090736-913287360-5050-8036-S0/frameworks/20151201-132431-913287360-5050-28876-0003/executors/20151107-090736-913287360-5050-8036-S0/runs/c6d0acca-7ab3-4d65-bf12-85ce68f14750/spark-1.6.0-SNAPSHOT-bin-spark-master-tachyon-0.8.0/./utils.py
15/12/10 03:23:39 INFO Executor: Fetching 
http://192.168.111.54:48803/files/feature_extraction.py with timestamp 
1449717675235
15/12/10 03:23:39 INFO Utils: Fetching 
http://192.168.111.54:48803/files/feature_extraction.py to 
/var/mesos/slaves/20151107-090736-913287360-5050-8036-S0/frameworks/20151201-132431-913287360-5050-28876-0003/executors/20151107-090736-913287360-5050-8036-S0/runs/c6d0acca-7ab3-4d65-bf12-85ce68f14750/spark-9ba2e063-ea69-4734-8e5f-0b0e5a557f2a/fetchFileTemp6515687227072030355.tmp
15/12/10 03:23:39 INFO Utils: Copying 
/var/mesos/slaves/20151107-090736-913287360-5050-8036-S0/frameworks/20151201-132431-913287360-5050-28876-0003/executors/20151107-090736-913287360-5050-8036-S0/runs/c6d0acca-7ab3-4d65-bf12-85ce68f14750/spark-9ba2e063-ea69-4734-8e5f-0b0e5a557f2a/19571073061449717675235_cache
 to 
/var/mesos/slaves/20151107-090736-913287360-5050-8036-S0/frameworks/20151201-132431-913287360-5050-28876-0003/executors/20151107-090736-913287360-5050-8036-S0/runs/c6d0acca-7ab3-4d65-bf12-85ce68f14750/spark-1.6.0-SNAPSHOT-bin-spark-master-tachyon-0.8.0/./feature_extraction.py
15/12/10 03:23:41 INFO TorrentBroadcast: Started reading broadcast variable 2
15/12/10 03:23:49 INFO MemoryStore: Ensuring 1974 bytes of free space for block 
broadcast_2_piece0(free: 1543766016, max: 1543766016)
15/12/10 03:23:49 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in 
memory (estimated size 1974.0 B, free 1974.0 B)
15/12/10 03:23:49 INFO TorrentBroadcast: Reading broadcast variable 2 took 8260 
ms
15/12/10 03:23:50 INFO MemoryStore: Ensuring 1048576 bytes of free space for 
block broadcast_2(free: 1543764042, max: 1543766016)
15/12/10 03:23:50 INFO MemoryStore: Ensuring 3376 bytes of free space for block 
broadcast_2(free: 1543764042, max: 1543766016)
15/12/10 03:23:50 INFO MemoryStore: Block broadcast_2 stored as values in 
memory (estimated size 3.3 KB, free 5.2 KB)
15/12/10 03:23:51 INFO HadoopRDD: Input split: 
tachyon://mesos-master-1:19998/caltech-256.hseq:0+33554432
15/12/10 03:23:51 INFO TorrentBroadcast: Started reading broadcast variable 0
15/12/10 03:23:51 INFO MemoryStore: Ensuring 20526 bytes of free space for 
block broadcast_0_piece0(free: 1543760666, max: 1543766016)
15/12/10 03:23:51 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in 
memory (estimated size 20.0 KB, free 25.3 KB)
15/12/10 03:23:51 INFO TorrentBroadcast: Reading broadcast variable 0 took 51 ms
15/12/10 03:23:53 INFO MemoryStore: Ensuring 1048576 bytes of free space for 
block broadcast_0(free: 1543740140, max: 1543766016)
15/12/10 03:24:05 INFO MemoryStore: Ensuring 298752 bytes of free space for 
block broadcast_0(free: 1543740140, max: 1543766016)
15/12/10 03:24:05 INFO MemoryStore: Block broadcast_0 stored as values in 
memory (estimated size 291.8 KB, free 317.0 KB)
15/12/10 03:24:05 INFO deprecation: mapred.tip.id is deprecated. Instead, use 
mapreduce.task.id
15/12/10 03:24:05 INFO deprecation: mapred.task.id is deprecated. Instead, use 
mapreduce.task.attempt.id
15/12/10 03:24:05 INFO deprecation: mapred.task.is.map is deprecated. Instead, 
use mapreduce.task.ismap
15/12/10 03:24:05 INFO deprecation: mapred.task.partition is deprecated. 
Instead, use mapreduce.task.partition
15/12/10 03:24:05 INFO deprecation: mapred.job.id is deprecated. Instead, use 
mapreduce.job.id
15/12/10 03:24:05 INFO : 
initialize(tachyon://mesos-master-1:19998/caltech-256.hseq, Configuration: ). 
Connecting to Tachyon: tachyon://mesos-master-1:19998/caltech-256.hseq
15/12/10 03:24:05 INFO : Loading Tachyon properties from Hadoop configuration: 
{}
15/12/10 03:24:06 INFO : Tachyon client (version 0.8.0) is trying to connect 
with BlockMaster master @ mesos-master-1/192.168.111.54:19998
15/12/10 03:24:06 INFO : Client registered with BlockMaster master @ 
mesos-master-1/192.168.111.54:19998
15/12/10 03:24:06 INFO : Tachyon client (version 0.8.0) is trying to connect 
with FileSystemMaster master @ mesos-master-1/192.168.111.54:19998
15/12/10 03:24:06 INFO : Client registered with FileSystemMaster master @ 
mesos-master-1/192.168.111.54:19998
15/12/10 03:24:06 INFO : tachyon://mesos-master-1:19998 
tachyon://mesos-master-1:19998 swift://spark.swift1
15/12/10 03:24:06 INFO : getWorkingDirectory: /
15/12/10 03:24:06 INFO : 
getFileStatus(tachyon://mesos-master-1:19998/caltech-256.hseq): HDFS Path: 
swift://spark.swift1/caltech-256.hseq TPath: 
tachyon://mesos-master-1:19998/caltech-256.hseq
15/12/10 03:24:06 INFO : open(tachyon://mesos-master-1:19998/caltech-256.hseq, 
65536)
15/12/10 03:24:06 INFO : Tachyon client (version 0.8.0) is trying to connect 
with FileSystemMaster master @ mesos-master-1/192.168.111.54:19998
15/12/10 03:24:06 INFO : Client registered with FileSystemMaster master @ 
mesos-master-1/192.168.111.54:19998
15/12/10 03:24:09 INFO : Connecting local worker @ 
mesos-slave-1/192.168.111.57:29998
15/12/10 03:24:09 INFO : Folder /mnt/ramdisk/tachyonworker/8104385876190351497 
was created!
15/12/10 03:24:09 INFO : LocalBlockOutStream created new file block, block 
path: /mnt/ramdisk/tachyonworker/8104385876190351497/16777216
15/12/10 03:24:09 INFO CodecPool: Got brand-new decompressor [.deflate]
15/12/10 03:24:09 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 18725 
bytes result sent to driver
15/12/10 03:24:10 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
15/12/10 03:24:10 INFO TorrentBroadcast: Started reading broadcast variable 3
15/12/10 03:24:10 INFO MemoryStore: Ensuring 26650 bytes of free space for 
block broadcast_3_piece0(free: 1543441388, max: 1543766016)
15/12/10 03:24:10 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in 
memory (estimated size 26.0 KB, free 343.0 KB)
15/12/10 03:24:10 INFO TorrentBroadcast: Reading broadcast variable 3 took 29 ms
15/12/10 03:24:10 INFO MemoryStore: Ensuring 1048576 bytes of free space for 
block broadcast_3(free: 1543414738, max: 1543766016)
15/12/10 03:24:10 INFO MemoryStore: Ensuring 72408 bytes of free space for 
block broadcast_3(free: 1543414738, max: 1543766016)
15/12/10 03:24:10 INFO MemoryStore: Block broadcast_3 stored as values in 
memory (estimated size 70.7 KB, free 413.8 KB)
15/12/10 03:24:10 INFO HadoopRDD: Input split: 
tachyon://mesos-master-1:19998/caltech-256.hseq:0+33554432
15/12/10 03:24:10 INFO : getWorkingDirectory: /
15/12/10 03:24:10 INFO : 
getFileStatus(tachyon://mesos-master-1:19998/caltech-256.hseq): HDFS Path: 
swift://spark.swift1/caltech-256.hseq TPath: 
tachyon://mesos-master-1:19998/caltech-256.hseq
15/12/10 03:24:10 INFO : open(tachyon://mesos-master-1:19998/caltech-256.hseq, 
65536)
15/12/10 03:24:11 INFO : Folder /mnt/ramdisk/tachyonworker/6986283206757667884 
was created!
15/12/10 03:24:11 INFO : LocalBlockOutStream created new file block, block 
path: /mnt/ramdisk/tachyonworker/6986283206757667884/16777216
15/12/10 03:24:13 INFO Executor: Executor is trying to kill task 0.0 in stage 
1.0 (TID 1)
I1210 03:24:13.472942 26685 exec.cpp:379] Executor asked to shutdown
15/12/10 03:24:13 WARN PythonRunner: Incomplete task interrupted: Attempting to 
kill Python Worker
15/12/10 03:24:13 INFO Executor: Executor killed task 0.0 in stage 1.0 (TID 1)
15/12/10 03:24:16 ERROR TransportClient: Failed to send RPC 8514268406464501772 
to mesos-master-1/192.168.111.54:56676: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
15/12/10 03:24:16 WARN NettyRpcEndpointRef: Error sending message [message = 
Heartbeat(20151107-090736-913287360-5050-8036-S0,[Lscala.Tuple2;@14633d49,BlockManagerId(20151107-090736-913287360-5050-8036-S0,
 mesos-slave-1, 60277))] in 1 attempts
java.io.IOException: Failed to send RPC 8514268406464501772 to 
mesos-master-1/192.168.111.54:56676: java.nio.channels.ClosedChannelException
        at 
org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:229)
        at 
org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:216)
        at 
io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
        at 
io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:567)
        at 
io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
        at 
io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:801)
        at 
io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:699)
        at 
io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1122)
        at 
io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:633)
        at 
io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:32)
        at 
io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:908)
        at 
io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:960)
        at 
io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:893)
        at 
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
        at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.channels.ClosedChannelException



Spark driver log:
/srv/spark-1.6.0-SNAPSHOT-bin-spark-master-tachyon-0.8.0/bin/spark-submit 
--executor-memory 2g --py-files utils.py feature_extraction.py surf 
tachyon://mesos-master-1:19998/caltech-256.hseq 
tachyon://mesos-master-1:19998/caltech-256-surf.pickle
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/12/10 03:21:12 INFO SparkContext: Running Spark version 1.6.0-SNAPSHOT
15/12/10 03:21:12 WARN NativeCodeLoader: Unable to load native-hadoop library 
for your platform... using builtin-java classes where applicable
15/12/10 03:21:12 INFO SecurityManager: Changing view acls to: root
15/12/10 03:21:12 INFO SecurityManager: Changing modify acls to: root
15/12/10 03:21:12 INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users with view permissions: Set(root); users with 
modify permissions: Set(root)
15/12/10 03:21:13 INFO Utils: Successfully started service 'NettyRpcEnv' on 
port 0.
15/12/10 03:21:14 INFO Slf4jLogger: Slf4jLogger started
15/12/10 03:21:14 INFO Remoting: Starting remoting
15/12/10 03:21:14 INFO Remoting: Remoting started; listening on addresses 
:[akka.tcp://[email protected]:49812]
15/12/10 03:21:14 INFO Utils: Successfully started service 'sparkDriver' on 
port 49812.
15/12/10 03:21:14 INFO SparkEnv: Registering MapOutputTracker
15/12/10 03:21:14 INFO SparkEnv: Registering BlockManagerMaster
15/12/10 03:21:14 INFO DiskBlockManager: Created local directory at 
/tmp/blockmgr-dc7c2a3a-b5e5-417d-9db3-b77697349c0b
15/12/10 03:21:14 INFO MemoryStore: MemoryStore started with capacity 736.1 MB
15/12/10 03:21:14 INFO HttpFileServer: HTTP File server directory is 
/tmp/spark-f0e72af0-2e43-4b63-8943-21e82ce88e5d/httpd-3fd66895-2583-4df5-b254-784067b5fe14
15/12/10 03:21:14 INFO HttpServer: Starting HTTP Server
15/12/10 03:21:14 INFO Utils: Successfully started service 'HTTP file server' 
on port 48803.
15/12/10 03:21:14 INFO Utils: Successfully started service 'HTTP file server' 
on port 48803.
15/12/10 03:21:14 INFO SparkEnv: Registering OutputCommitCoordinator
15/12/10 03:21:15 INFO Utils: Successfully started service 'SparkUI' on port 
4040.
15/12/10 03:21:15 INFO SparkUI: Started SparkUI at http://192.168.111.54:4040
15/12/10 03:21:15 INFO Utils: Copying 
/root/cloudvision_spark/feature_extraction.py to 
/tmp/spark-f0e72af0-2e43-4b63-8943-21e82ce88e5d/userFiles-c6b319ae-06bc-42f4-9625-9c0d5dc31624/feature_extraction.py
15/12/10 03:21:15 INFO SparkContext: Added file 
file:/root/cloudvision_spark/feature_extraction.py at 
http://192.168.111.54:48803/files/feature_extraction.py with timestamp 
1449717675235
15/12/10 03:21:15 INFO Utils: Copying /root/cloudvision_spark/utils.py to 
/tmp/spark-f0e72af0-2e43-4b63-8943-21e82ce88e5d/userFiles-c6b319ae-06bc-42f4-9625-9c0d5dc31624/utils.py
15/12/10 03:21:15 INFO SparkContext: Added file 
file:/root/cloudvision_spark/utils.py at 
http://192.168.111.54:48803/files/utils.py with timestamp 1449717675253
15/12/10 03:21:15 WARN MetricsSystem: Using default name DAGScheduler for 
source because spark.app.id is not set.
I1210 03:21:15.617425  2479 sched.cpp:157] Version: 0.23.0
I1210 03:21:15.622269  2469 sched.cpp:254] New master detected at 
[email protected]:5050
I1210 03:21:15.622730  2469 sched.cpp:264] No credentials provided. Attempting 
to register without authentication
I1210 03:21:15.625747  2469 sched.cpp:448] Framework registered with 
20151201-132431-913287360-5050-28876-0003
15/12/10 03:21:15 INFO MesosSchedulerBackend: Registered as framework ID 
20151201-132431-913287360-5050-28876-0003
15/12/10 03:21:15 INFO Utils: Successfully started service 
'org.apache.spark.network.netty.NettyBlockTransferService' on port 36499.
15/12/10 03:21:15 INFO NettyBlockTransferService: Server created on 36499
15/12/10 03:21:15 INFO BlockManagerMaster: Trying to register BlockManager
15/12/10 03:21:15 INFO BlockManagerMasterEndpoint: Registering block manager 
192.168.111.54:36499 with 736.1 MB RAM, BlockManagerId(driver, 192.168.111.54, 
36499)
15/12/10 03:21:15 INFO BlockManagerMaster: Registered BlockManager
15/12/10 03:21:19 INFO MemoryStore: Ensuring 1048576 bytes of free space for 
block broadcast_0(free: 771883008, max: 771883008)
15/12/10 03:21:19 INFO MemoryStore: Ensuring 221904 bytes of free space for 
block broadcast_0(free: 771883008, max: 771883008)
15/12/10 03:21:19 INFO MemoryStore: Block broadcast_0 stored as values in 
memory (estimated size 216.7 KB, free 216.7 KB)
15/12/10 03:21:19 INFO MemoryStore: Ensuring 20526 bytes of free space for 
block broadcast_0_piece0(free: 771661104, max: 771883008)
15/12/10 03:21:19 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in 
memory (estimated size 20.0 KB, free 236.7 KB)
15/12/10 03:21:19 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 
192.168.111.54:36499 (size: 20.0 KB, free: 736.1 MB)
15/12/10 03:21:19 INFO SparkContext: Created broadcast 0 from sequenceFile at 
PythonRDD.scala:474
15/12/10 03:21:19 INFO MemoryStore: Ensuring 1048576 bytes of free space for 
block broadcast_1(free: 771640578, max: 771883008)
15/12/10 03:21:19 INFO MemoryStore: Ensuring 221944 bytes of free space for 
block broadcast_1(free: 771640578, max: 771883008)
15/12/10 03:21:19 INFO MemoryStore: Block broadcast_1 stored as values in 
memory (estimated size 216.7 KB, free 453.5 KB)
15/12/10 03:21:19 INFO MemoryStore: Ensuring 20526 bytes of free space for 
block broadcast_1_piece0(free: 771418634, max: 771883008)
15/12/10 03:21:19 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in 
memory (estimated size 20.0 KB, free 473.5 KB)
15/12/10 03:21:19 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 
192.168.111.54:36499 (size: 20.0 KB, free: 736.1 MB)
15/12/10 03:21:19 INFO SparkContext: Created broadcast 1 from broadcast at 
PythonRDD.scala:475
15/12/10 03:21:20 INFO : 
initialize(tachyon://mesos-master-1:19998/caltech-256.hseq, Configuration: 
core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, 
yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml). Connecting 
to Tachyon: tachyon://mesos-master-1:19998/caltech-256.hseq
15/12/10 03:21:20 INFO : Loading Tachyon properties from Hadoop configuration: 
{}
15/12/10 03:21:20 INFO : Tachyon client (version 0.8.0) is trying to connect 
with BlockMaster master @ mesos-master-1/192.168.111.54:19998
15/12/10 03:21:20 INFO : Client registered with BlockMaster master @ 
mesos-master-1/192.168.111.54:19998
15/12/10 03:21:20 INFO : Tachyon client (version 0.8.0) is trying to connect 
with FileSystemMaster master @ mesos-master-1/192.168.111.54:19998
15/12/10 03:21:20 INFO : Client registered with FileSystemMaster master @ 
mesos-master-1/192.168.111.54:19998
15/12/10 03:21:20 INFO : tachyon://mesos-master-1:19998 
tachyon://mesos-master-1:19998 swift://spark.swift1
15/12/10 03:21:20 INFO : 
getFileStatus(tachyon://mesos-master-1:19998/caltech-256.hseq): HDFS Path: 
swift://spark.swift1/caltech-256.hseq TPath: 
tachyon://mesos-master-1:19998/caltech-256.hseq
15/12/10 03:21:20 INFO FileInputFormat: Total input paths to process : 1
15/12/10 03:21:34 INFO SparkContext: Starting job: take at SerDeUtil.scala:201
15/12/10 03:21:34 INFO DAGScheduler: Got job 0 (take at SerDeUtil.scala:201) 
with 1 output partitions
15/12/10 03:21:34 INFO DAGScheduler: Final stage: ResultStage 0 (take at 
SerDeUtil.scala:201)
15/12/10 03:21:34 INFO DAGScheduler: Parents of final stage: List()
15/12/10 03:21:34 INFO DAGScheduler: Missing parents: List()
15/12/10 03:21:34 INFO DAGScheduler: Submitting ResultStage 0 
(MapPartitionsRDD[1] at map at PythonHadoopUtil.scala:181), which has no 
missing parents
15/12/10 03:21:34 INFO MemoryStore: Ensuring 1048576 bytes of free space for 
block broadcast_2(free: 771398108, max: 771883008)
15/12/10 03:21:34 INFO MemoryStore: Ensuring 3376 bytes of free space for block 
broadcast_2(free: 771398108, max: 771883008)
15/12/10 03:21:34 INFO MemoryStore: Block broadcast_2 stored as values in 
memory (estimated size 3.3 KB, free 476.8 KB)
15/12/10 03:21:34 INFO MemoryStore: Ensuring 1974 bytes of free space for block 
broadcast_2_piece0(free: 771394732, max: 771883008)
15/12/10 03:21:34 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in 
memory (estimated size 1974.0 B, free 478.8 KB)
15/12/10 03:21:34 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 
192.168.111.54:36499 (size: 1974.0 B, free: 736.1 MB)
15/12/10 03:21:34 INFO SparkContext: Created broadcast 2 from broadcast at 
DAGScheduler.scala:1003
15/12/10 03:21:34 INFO DAGScheduler: Submitting 1 missing tasks from 
ResultStage 0 (MapPartitionsRDD[1] at map at PythonHadoopUtil.scala:181)
15/12/10 03:21:34 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
15/12/10 03:21:34 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 
mesos-slave-1, partition 0,ANY, 2266 bytes)
15/12/10 03:23:37 INFO BlockManagerMasterEndpoint: Registering block manager 
mesos-slave-1:60277 with 1472.3 MB RAM, 
BlockManagerId(20151107-090736-913287360-5050-8036-S0, mesos-slave-1, 60277)
15/12/10 03:23:49 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 
mesos-slave-1:60277 (size: 1974.0 B, free: 1472.2 MB)
15/12/10 03:23:51 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 
mesos-slave-1:60277 (size: 20.0 KB, free: 1472.2 MB)
15/12/10 03:24:09 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) 
in 154720 ms on mesos-slave-1 (1/1)
15/12/10 03:24:09 INFO DAGScheduler: ResultStage 0 (take at 
SerDeUtil.scala:201) finished in 154.734 s
15/12/10 03:24:09 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have 
all completed, from pool
15/12/10 03:24:09 INFO DAGScheduler: Job 0 finished: take at 
SerDeUtil.scala:201, took 154.847279 s
15/12/10 03:24:09 INFO SequenceFileRDDFunctions: Saving as sequence file of 
type (NullWritable,BytesWritable)
15/12/10 03:24:09 INFO : getWorkingDirectory: /
15/12/10 03:24:09 INFO : getWorkingDirectory: /
15/12/10 03:24:09 INFO : 
getFileStatus(tachyon://mesos-master-1:19998/caltech-256-surf.pickle): HDFS 
Path: swift://spark.swift1/caltech-256-surf.pickle TPath: 
tachyon://mesos-master-1:19998/caltech-256-surf.pickle 
15/12/10 03:24:09 INFO : File does not exist: 
tachyon://mesos-master-1:19998/caltech-256-surf.pickle
15/12/10 03:24:09 INFO deprecation: mapred.tip.id is deprecated. Instead, use 
mapreduce.task.id
15/12/10 03:24:09 INFO deprecation: mapred.task.id is deprecated. Instead, use 
mapreduce.task.attempt.id
15/12/10 03:24:09 INFO deprecation: mapred.task.is.map is deprecated. Instead, 
use mapreduce.task.ismap
15/12/10 03:24:09 INFO deprecation: mapred.task.partition is deprecated. 
Instead, use mapreduce.task.partition
15/12/10 03:24:09 INFO deprecation: mapred.job.id is deprecated. Instead, use 
mapreduce.job.id
15/12/10 03:24:09 INFO : getWorkingDirectory: /
15/12/10 03:24:09 INFO : 
mkdirs(tachyon://mesos-master-1:19998/caltech-256-surf.pickle/_temporary/0, 
rwxrwxrwx)
15/12/10 03:24:10 INFO SparkContext: Starting job: saveAsObjectFile at 
NativeMethodAccessorImpl.java:-2
15/12/10 03:24:10 INFO DAGScheduler: Got job 1 (saveAsObjectFile at 
NativeMethodAccessorImpl.java:-2) with 34 output partitions
15/12/10 03:24:10 INFO DAGScheduler: Final stage: ResultStage 1 
(saveAsObjectFile at NativeMethodAccessorImpl.java:-2)
15/12/10 03:24:10 INFO DAGScheduler: Parents of final stage: List()
15/12/10 03:24:10 INFO DAGScheduler: Missing parents: List()
15/12/10 03:24:10 INFO DAGScheduler: Submitting ResultStage 1 
(MapPartitionsRDD[5] at saveAsObjectFile at NativeMethodAccessorImpl.java:-2), 
which has no missing parents
15/12/10 03:24:10 INFO MemoryStore: Ensuring 1048576 bytes of free space for 
block broadcast_3(free: 771392758, max: 771883008)
15/12/10 03:24:10 INFO MemoryStore: Ensuring 72408 bytes of free space for 
block broadcast_3(free: 771392758, max: 771883008)
15/12/10 03:24:10 INFO MemoryStore: Block broadcast_3 stored as values in 
memory (estimated size 70.7 KB, free 549.5 KB)
15/12/10 03:24:10 INFO MemoryStore: Ensuring 26650 bytes of free space for 
block broadcast_3_piece0(free: 771320350, max: 771883008)
15/12/10 03:24:10 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in 
memory (estimated size 26.0 KB, free 575.5 KB)
15/12/10 03:24:10 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on 
192.168.111.54:36499 (size: 26.0 KB, free: 736.1 MB)
15/12/10 03:24:10 INFO SparkContext: Created broadcast 3 from broadcast at 
DAGScheduler.scala:1003
15/12/10 03:24:10 INFO DAGScheduler: Submitting 34 missing tasks from 
ResultStage 1 (MapPartitionsRDD[5] at saveAsObjectFile at 
NativeMethodAccessorImpl.java:-2)
15/12/10 03:24:10 INFO TaskSchedulerImpl: Adding task set 1.0 with 34 tasks
15/12/10 03:24:10 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, 
mesos-slave-1, partition 0,ANY, 2266 bytes)
15/12/10 03:24:10 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 2, 
mesos-slave-2, partition 1,ANY, 2266 bytes)
15/12/10 03:24:10 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on 
mesos-slave-1:60277 (size: 26.0 KB, free: 1472.2 MB)
15/12/10 03:24:11 INFO TaskSetManager: Starting task 2.0 in stage 1.0 (TID 3, 
mesos-slave-3, partition 2,ANY, 2266 bytes)
15/12/10 03:24:12 WARN TaskSetManager: Lost task 2.0 in stage 1.0 (TID 3, 
mesos-slave-3): UnknownReason
15/12/10 03:24:12 INFO TaskSetManager: Starting task 2.1 in stage 1.0 (TID 4, 
mesos-slave-3, partition 2,ANY, 2266 bytes)
15/12/10 03:24:12 WARN TaskSetManager: Lost task 2.1 in stage 1.0 (TID 4, 
mesos-slave-3): UnknownReason
15/12/10 03:24:12 INFO TaskSetManager: Starting task 2.2 in stage 1.0 (TID 5, 
mesos-slave-3, partition 2,ANY, 2266 bytes)
15/12/10 03:24:12 WARN TaskSetManager: Lost task 1.0 in stage 1.0 (TID 2, 
mesos-slave-2): UnknownReason
15/12/10 03:24:12 WARN TaskSetManager: Lost task 2.2 in stage 1.0 (TID 5, 
mesos-slave-3): UnknownReason
15/12/10 03:24:12 INFO TaskSetManager: Starting task 2.3 in stage 1.0 (TID 6, 
mesos-slave-2, partition 2,ANY, 2266 bytes)
15/12/10 03:24:12 INFO TaskSetManager: Starting task 1.1 in stage 1.0 (TID 7, 
mesos-slave-3, partition 1,ANY, 2266 bytes)
15/12/10 03:24:13 WARN TaskSetManager: Lost task 2.3 in stage 1.0 (TID 6, 
mesos-slave-2): UnknownReason
15/12/10 03:24:13 ERROR TaskSetManager: Task 2 in stage 1.0 failed 4 times; 
aborting job
15/12/10 03:24:13 INFO TaskSchedulerImpl: Cancelling stage 1
15/12/10 03:24:13 INFO TaskSchedulerImpl: Stage 1 was cancelled
15/12/10 03:24:13 WARN TaskSetManager: Lost task 1.1 in stage 1.0 (TID 7, 
mesos-slave-3): UnknownReason
15/12/10 03:24:13 INFO DAGScheduler: ResultStage 1 (saveAsObjectFile at 
NativeMethodAccessorImpl.java:-2) failed in 2.581 s
15/12/10 03:24:13 INFO DAGScheduler: Job 1 failed: saveAsObjectFile at 
NativeMethodAccessorImpl.java:-2, took 2.940948 s
Traceback (most recent call last):
  File "/root/cloudvision_spark/feature_extraction.py", line 47, in <module>
    features.saveAsPickleFile(feature_sequencefile_path)
  File 
"/srv/spark-1.6.0-SNAPSHOT-bin-spark-master-tachyon-0.8.0/python/lib/pyspark.zip/pyspark/rdd.py",
 line 1455, in saveAsPickleFile
  File 
"/srv/spark-1.6.0-SNAPSHOT-bin-spark-master-tachyon-0.8.0/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py",
 line 813, in __call__
  File 
"/srv/spark-1.6.0-SNAPSHOT-bin-spark-master-tachyon-0.8.0/python/lib/py4j-0.9-src.zip/py4j/protocol.py",
 line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling 
o29.saveAsObjectFile.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in 
stage 1.0 failed 4 times, most recent failure: Lost task 2.3 in stage 1.0 (TID 
6, mesos-slave-2): UnknownReason
Driver stacktrace:
        at 
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1428)
        at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1416)
        at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1415)
        at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at 
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1415)
        at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
        at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
        at scala.Option.foreach(Option.scala:236)
        at 
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
        at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
        at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
        at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at 
org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1814)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1827)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1904)
        at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1213)
        at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1156)
        at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1156)
        at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
        at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
        at 
org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1156)
        at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply$mcV$sp(PairRDDFunctions.scala:1060)
        at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1026)
        at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1026)
        at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
        at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
        at 
org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:1026)
        at 
org.apache.spark.rdd.SequenceFileRDDFunctions$$anonfun$saveAsSequenceFile$1.apply$mcV$sp(SequenceFileRDDFunctions.scala:105)
        at 
org.apache.spark.rdd.SequenceFileRDDFunctions$$anonfun$saveAsSequenceFile$1.apply(SequenceFileRDDFunctions.scala:90)
        at 
org.apache.spark.rdd.SequenceFileRDDFunctions$$anonfun$saveAsSequenceFile$1.apply(SequenceFileRDDFunctions.scala:90)
        at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
        at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
        at 
org.apache.spark.rdd.SequenceFileRDDFunctions.saveAsSequenceFile(SequenceFileRDDFunctions.scala:90)
        at 
org.apache.spark.rdd.RDD$$anonfun$saveAsObjectFile$1.apply$mcV$sp(RDD.scala:1445)
        at 
org.apache.spark.rdd.RDD$$anonfun$saveAsObjectFile$1.apply(RDD.scala:1445)
        at 
org.apache.spark.rdd.RDD$$anonfun$saveAsObjectFile$1.apply(RDD.scala:1445)
        at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
        at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
        at org.apache.spark.rdd.RDD.saveAsObjectFile(RDD.scala:1442)
        at 
org.apache.spark.api.java.JavaRDDLike$class.saveAsObjectFile(JavaRDDLike.scala:517)
        at 
org.apache.spark.api.java.AbstractJavaRDDLike.saveAsObjectFile(JavaRDDLike.scala:45)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
        at py4j.Gateway.invoke(Gateway.java:259)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:209)
        at java.lang.Thread.run(Thread.java:745)

15/12/10 03:24:13 INFO SparkContext: Invoking stop() from shutdown hook
15/12/10 03:24:13 INFO SparkUI: Stopped Spark web UI at 
http://192.168.111.54:4040
15/12/10 03:24:13 INFO DAGScheduler: Stopping DAGScheduler
I1210 03:24:13.387411  2556 sched.cpp:1591] Asked to stop the driver
I1210 03:24:13.387644  2476 sched.cpp:835] Stopping framework 
'20151201-132431-913287360-5050-28876-0003'
15/12/10 03:24:13 INFO MesosSchedulerBackend: driver.run() returned with code 
DRIVER_STOPPED
15/12/10 03:24:13 INFO MapOutputTrackerMasterEndpoint: 
MapOutputTrackerMasterEndpoint stopped!
15/12/10 03:24:13 INFO MemoryStore: MemoryStore cleared
15/12/10 03:24:13 INFO BlockManager: BlockManager stopped
15/12/10 03:24:13 INFO BlockManagerMaster: BlockManagerMaster stopped
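For context on the failing call: `saveAsPickleFile` (which the traceback shows delegating to `saveAsObjectFile`) writes RDD records through Python's pickle before they reach the underFS. A minimal standalone sketch of that per-record serialization, using a made-up `(filename, raw image bytes)` record matching the input format described above (the sample values are hypothetical, not taken from the actual dataset):

```python
# Hypothetical sketch: saveAsPickleFile pickles records (PySpark batches
# them first) before writing to the SequenceFile. Each (filename, bytes)
# record from the image RDD must therefore round-trip through pickle.
import pickle

# Made-up sample record: filename string plus raw image bytes.
record = ("images/cat.jpg", b"\x89PNG\r\n\x1a\n" + b"\x00" * 16)

blob = pickle.dumps(record, protocol=pickle.HIGHEST_PROTOCOL)
restored = pickle.loads(blob)

assert restored == record  # byte payloads of this shape pickle cleanly
```

This only illustrates the serialization step; the records themselves round-trip fine, which is consistent with the failure being on the executor/underFS side rather than in pickling.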




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
