[error] 14/04/25 23:09:57 INFO slf4j.Slf4jLogger: Slf4jLogger started
[error] 14/04/25 23:09:57 INFO Remoting: Starting remoting
[error] 14/04/25 23:09:58 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@cm03:59999]
[error] 14/04/25 23:09:58 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@cm03:59999]
[error] 14/04/25 23:09:58 INFO spark.SparkEnv: Registering BlockManagerMaster
[error] 14/04/25 23:09:58 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-local-20140425230958-f550
[error] 14/04/25 23:09:58 INFO storage.MemoryStore: MemoryStore started with capacity 1638.6 MB.
[error] 14/04/25 23:09:58 INFO network.ConnectionManager: Bound socket to port 60395 with id = ConnectionManagerId(cm03,60395)
[error] 14/04/25 23:09:58 INFO storage.BlockManagerMaster: Trying to register BlockManager
[error] 14/04/25 23:09:58 INFO storage.BlockManagerMasterActor$BlockManagerInfo: Registering block manager cm03:60395 with 1638.6 MB RAM
[error] 14/04/25 23:09:58 INFO storage.BlockManagerMaster: Registered BlockManager
[error] 14/04/25 23:09:58 INFO spark.HttpServer: Starting HTTP Server
[error] 14/04/25 23:09:58 INFO server.Server: jetty-7.6.8.v20121106
[error] 14/04/25 23:09:58 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:55020
[error] 14/04/25 23:09:58 INFO broadcast.HttpBroadcast: Broadcast server started at http://192.168.100.172:55020
[error] 14/04/25 23:09:58 INFO spark.SparkEnv: Registering MapOutputTracker
[error] 14/04/25 23:09:58 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-57df291f-b749-46f7-a1ed-98609e2ad9d8
[error] 14/04/25 23:09:58 INFO spark.HttpServer: Starting HTTP Server
[error] 14/04/25 23:09:58 INFO server.Server: jetty-7.6.8.v20121106
[error] 14/04/25 23:09:58 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:47145
[error] 14/04/25 23:09:58 INFO server.Server: jetty-7.6.8.v20121106
[error] 14/04/25 23:09:58 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/storage/rdd,null}
[error] 14/04/25 23:09:58 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/storage,null}
[error] 14/04/25 23:09:58 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/stages/stage,null}
[error] 14/04/25 23:09:58 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/stages/pool,null}
[error] 14/04/25 23:09:58 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/stages,null}
[error] 14/04/25 23:09:58 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/environment,null}
[error] 14/04/25 23:09:58 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/executors,null}
[error] 14/04/25 23:09:58 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/metrics/json,null}
[error] 14/04/25 23:09:58 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/static,null}
[error] 14/04/25 23:09:58 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/,null}
[error] 14/04/25 23:09:58 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
[error] 14/04/25 23:09:58 INFO ui.SparkUI: Started Spark Web UI at http://cm03:4040
[error] 14/04/25 23:09:58 INFO spark.SparkContext: Added JAR target/scala-2.10/simple-project_2.10-1.0.jar at http://192.168.100.172:47145/jars/simple-project_2.10-1.0.jar with timestamp 1398434998858
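
For context, simple-project_2.10-1.0.jar is the application jar that "sbt package" builds, which the SparkContext then serves to executors over HTTP. A build definition along these lines would produce it. This is only a sketch: the project name, version, and Scala 2.10.x are inferred from the jar file name, and the Spark 0.9.1 dependency from the install path that appears later in the log.

name := "Simple Project"

version := "1.0"

scalaVersion := "2.10.3"  // any 2.10.x matching your Spark build

libraryDependencies += "org.apache.spark" %% "spark-core" % "0.9.1"

The log then continues: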
[error] 14/04/25 23:09:59 INFO storage.MemoryStore: ensureFreeSpace(32856) called with curMem=0, maxMem=1718196633
[error] 14/04/25 23:09:59 INFO storage.MemoryStore: Block broadcast_0 stored as values to memory (estimated size 32.1 KB, free 1638.6 MB)
[error] 14/04/25 23:09:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[error] 14/04/25 23:09:59 WARN snappy.LoadSnappy: Snappy native library not loaded
[error] 14/04/25 23:09:59 INFO mapred.FileInputFormat: Total input paths to process : 1
[error] 14/04/25 23:09:59 INFO spark.SparkContext: Starting job: count at test.scala:11
[error] 14/04/25 23:09:59 INFO scheduler.DAGScheduler: Got job 0 (count at test.scala:11) with 2 output partitions (allowLocal=false)
[error] 14/04/25 23:09:59 INFO scheduler.DAGScheduler: Final stage: Stage 0 (count at test.scala:11)
[error] 14/04/25 23:09:59 INFO scheduler.DAGScheduler: Parents of final stage: List()
[error] 14/04/25 23:09:59 INFO scheduler.DAGScheduler: Missing parents: List()
[error] 14/04/25 23:09:59 INFO scheduler.DAGScheduler: Submitting Stage 0 (FilteredRDD[2] at filter at test.scala:11), which has no missing parents
[error] 14/04/25 23:09:59 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 0 (FilteredRDD[2] at filter at test.scala:11)
[error] 14/04/25 23:09:59 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
[error] 14/04/25 23:09:59 INFO scheduler.TaskSetManager: Starting task 0.0:0 as TID 0 on executor localhost: localhost (PROCESS_LOCAL)
[error] 14/04/25 23:09:59 INFO scheduler.TaskSetManager: Serialized task 0.0:0 as 1730 bytes in 6 ms
[error] 14/04/25 23:09:59 INFO executor.Executor: Running task ID 0
[error] 14/04/25 23:09:59 INFO executor.Executor: Fetching http://192.168.100.172:47145/jars/simple-project_2.10-1.0.jar with timestamp 1398434998858
[error] 14/04/25 23:09:59 INFO util.Utils: Fetching http://192.168.100.172:47145/jars/simple-project_2.10-1.0.jar to /tmp/fetchFileTemp4281143700727127862.tmp
[error] 14/04/25 23:09:59 INFO executor.Executor: Adding file:/tmp/spark-c9d93422-c4bc-4c55-a980-55e252760e71/simple-project_2.10-1.0.jar to class loader
[error] 14/04/25 23:09:59 INFO storage.BlockManager: Found block broadcast_0 locally
[error] 14/04/25 23:09:59 INFO spark.CacheManager: Partition rdd_1_0 not found, computing it
[error] 14/04/25 23:09:59 INFO rdd.HadoopRDD: Input split: file:/home/exobrain/install/spark-0.9.1/README.md:0+1949
[error] 14/04/25 23:09:59 INFO storage.MemoryStore: ensureFreeSpace(6504) called with curMem=32856, maxMem=1718196633
[error] 14/04/25 23:09:59 INFO storage.MemoryStore: Block rdd_1_0 stored as values to memory (estimated size 6.4 KB, free 1638.6 MB)
[error] 14/04/25 23:09:59 INFO storage.BlockManagerMasterActor$BlockManagerInfo: Added rdd_1_0 in memory on cm03:60395 (size: 6.4 KB, free: 1638.6 MB)
[error] 14/04/25 23:09:59 INFO storage.BlockManagerMaster: Updated info of block rdd_1_0
[error] 14/04/25 23:09:59 INFO executor.Executor: Serialized size of result for 0 is 563
[error] 14/04/25 23:09:59 INFO executor.Executor: Sending result for 0 directly to driver
[error] 14/04/25 23:09:59 INFO executor.Executor: Finished task ID 0
[error] 14/04/25 23:09:59 INFO scheduler.TaskSetManager: Starting task 0.0:1 as TID 1 on executor localhost: localhost (PROCESS_LOCAL)
[error] 14/04/25 23:09:59 INFO scheduler.TaskSetManager: Serialized task 0.0:1 as 1730 bytes in 1 ms
[error] 14/04/25 23:09:59 INFO executor.Executor: Running task ID 1
[error] 14/04/25 23:09:59 INFO scheduler.TaskSetManager: Finished TID 0 in 212 ms on localhost (progress: 1/2)
[error] 14/04/25 23:09:59 INFO storage.BlockManager: Found block broadcast_0 locally
[error] 14/04/25 23:09:59 INFO scheduler.DAGScheduler: Completed ResultTask(0, 0)
[error] 14/04/25 23:09:59 INFO spark.CacheManager: Partition rdd_1_1 not found, computing it
[error] 14/04/25 23:09:59 INFO rdd.HadoopRDD: Input split: file:/home/exobrain/install/spark-0.9.1/README.md:1949+1950
[error] 14/04/25 23:09:59 INFO storage.MemoryStore: ensureFreeSpace(6216) called with curMem=39360, maxMem=1718196633
[error] 14/04/25 23:09:59 INFO storage.MemoryStore: Block rdd_1_1 stored as values to memory (estimated size 6.1 KB, free 1638.6 MB)
[error] 14/04/25 23:09:59 INFO storage.BlockManagerMasterActor$BlockManagerInfo: Added rdd_1_1 in memory on cm03:60395 (size: 6.1 KB, free: 1638.6 MB)
[error] 14/04/25 23:09:59 INFO storage.BlockManagerMaster: Updated info of block rdd_1_1
[error] 14/04/25 23:09:59 INFO executor.Executor: Serialized size of result for 1 is 563
[error] 14/04/25 23:09:59 INFO executor.Executor: Sending result for 1 directly to driver
[error] 14/04/25 23:09:59 INFO executor.Executor: Finished task ID 1
[error] 14/04/25 23:09:59 INFO scheduler.DAGScheduler: Completed ResultTask(0, 1)
[error] 14/04/25 23:09:59 INFO scheduler.TaskSetManager: Finished TID 1 in 24 ms on localhost (progress: 2/2)
[error] 14/04/25 23:09:59 INFO scheduler.DAGScheduler: Stage 0 (count at test.scala:11) finished in 0.244 s
[error] 14/04/25 23:09:59 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
[error] 14/04/25 23:09:59 INFO spark.SparkContext: Job finished: count at test.scala:11, took 0.358218887 s
[error] 14/04/25 23:09:59 INFO spark.SparkContext: Starting job: count at test.scala:12
[error] 14/04/25 23:09:59 INFO scheduler.DAGScheduler: Got job 1 (count at test.scala:12) with 2 output partitions (allowLocal=false)
[error] 14/04/25 23:09:59 INFO scheduler.DAGScheduler: Final stage: Stage 1 (count at test.scala:12)
[error] 14/04/25 23:09:59 INFO scheduler.DAGScheduler: Parents of final stage: List()
[error] 14/04/25 23:09:59 INFO scheduler.DAGScheduler: Missing parents: List()
[error] 14/04/25 23:09:59 INFO scheduler.DAGScheduler: Submitting Stage 1 (FilteredRDD[3] at filter at test.scala:12), which has no missing parents
[error] 14/04/25 23:09:59 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 1 (FilteredRDD[3] at filter at test.scala:12)
[error] 14/04/25 23:09:59 INFO scheduler.TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
[error] 14/04/25 23:09:59 INFO scheduler.TaskSetManager: Starting task 1.0:0 as TID 2 on executor localhost: localhost (PROCESS_LOCAL)
[error] 14/04/25 23:09:59 INFO scheduler.TaskSetManager: Serialized task 1.0:0 as 1734 bytes in 1 ms
[error] 14/04/25 23:09:59 INFO executor.Executor: Running task ID 2
[error] 14/04/25 23:09:59 INFO storage.BlockManager: Found block broadcast_0 locally
[error] 14/04/25 23:09:59 INFO storage.BlockManager: Found block rdd_1_0 locally
[error] 14/04/25 23:09:59 INFO executor.Executor: Serialized size of result for 2 is 563
[error] 14/04/25 23:09:59 INFO executor.Executor: Sending result for 2 directly to driver
[error] 14/04/25 23:09:59 INFO executor.Executor: Finished task ID 2
[error] 14/04/25 23:09:59 INFO scheduler.TaskSetManager: Starting task 1.0:1 as TID 3 on executor localhost: localhost (PROCESS_LOCAL)
[error] 14/04/25 23:09:59 INFO scheduler.TaskSetManager: Serialized task 1.0:1 as 1734 bytes in 0 ms
[error] 14/04/25 23:09:59 INFO executor.Executor: Running task ID 3
[error] 14/04/25 23:09:59 INFO scheduler.DAGScheduler: Completed ResultTask(1, 0)
[error] 14/04/25 23:09:59 INFO scheduler.TaskSetManager: Finished TID 2 in 13 ms on localhost (progress: 1/2)
[error] 14/04/25 23:09:59 INFO storage.BlockManager: Found block broadcast_0 locally
[error] 14/04/25 23:09:59 INFO storage.BlockManager: Found block rdd_1_1 locally
[error] 14/04/25 23:09:59 INFO executor.Executor: Serialized size of result for 3 is 563
[error] 14/04/25 23:09:59 INFO executor.Executor: Sending result for 3 directly to driver
[error] 14/04/25 23:09:59 INFO executor.Executor: Finished task ID 3
[error] 14/04/25 23:09:59 INFO scheduler.DAGScheduler: Completed ResultTask(1, 1)
[error] 14/04/25 23:09:59 INFO scheduler.TaskSetManager: Finished TID 3 in 9 ms on localhost (progress: 2/2)
[error] 14/04/25 23:09:59 INFO scheduler.DAGScheduler: Stage 1 (count at test.scala:12) finished in 0.021 s
[error] 14/04/25 23:09:59 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
[error] 14/04/25 23:09:59 INFO spark.SparkContext: Job finished: count at test.scala:12, took 0.034754522 s
[info] Lines with a: 62, Lines with b: 35
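
Note that the run actually succeeded: both stages completed and the counts were printed under [info]. The [error] prefixes are not Spark failures. Everything Spark logs here goes to stderr, and sbt echoes a program's stderr through its error logger, so every line gets tagged [error]. Judging from the log (the two count() jobs at test.scala:11 and :12, the FilteredRDDs, and the README.md input splits), test.scala is a quick-start-style app along the lines of the sketch below; the object name and the "local" master URL are assumptions, while the paths come straight from the log:

import org.apache.spark.SparkContext

object SimpleApp {
  def main(args: Array[String]) {
    // Input file taken from the HadoopRDD input splits in the log
    val logFile = "/home/exobrain/install/spark-0.9.1/README.md"
    // Master URL assumed; the jar path matches the "Added JAR" log line
    val sc = new SparkContext("local", "Simple App",
      "/home/exobrain/install/spark-0.9.1",
      List("target/scala-2.10/simple-project_2.10-1.0.jar"))
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count() // job 0, test.scala:11
    val numBs = logData.filter(line => line.contains("b")).count() // job 1, test.scala:12
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
  }
}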
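The NativeCodeLoader and LoadSnappy warnings are also harmless; they only mean the native Hadoop libraries are not installed, so built-in Java implementations are used instead. If the INFO chatter itself is the annoyance, the usual fix is to lower Spark's console log level in conf/log4j.properties (copied from conf/log4j.properties.template). A minimal sketch, assuming the stock log4j 1.2 console appender; the conversion pattern mirrors the timestamp/level/logger format seen in the log above:

# conf/log4j.properties -- only warnings and errors reach the console
log4j.rootCategory=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n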