No, I’m not using Dataproc.
I’m simply running on my local machine. I started a local Spark cluster with sbin/start-master.sh and sbin/start-slave.sh, then submitted my Beam job to that cluster.
The gs:// file is kinglear.txt from Beam’s example code, and it should be public.
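
For reference, the commands were roughly as follows (reconstructed: the master URL and jar name are taken from the log below, but the example class name and pipeline options are assumptions based on Beam's WordCount example, not exact copies of what was run):

```shell
# Start a standalone Spark 1.6 master and one worker on this machine.
./sbin/start-master.sh
./sbin/start-slave.sh spark://Chaoran-MacBook-Pro.local:7077

# Submit the bundled Beam WordCount jar to that master.
./bin/spark-submit \
  --class org.apache.beam.examples.WordCount \
  --master spark://Chaoran-MacBook-Pro.local:7077 \
  target/word-count-beam-bundled-0.1.jar \
  --runner=SparkRunner \
  --inputFile=gs://apache-beam-samples/shakespeare/kinglear.txt \
  --output=counts
```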

My full stack trace is attached.

Thanks,
Chaoran

log4j:WARN No appenders could be found for logger 
(org.apache.beam.sdk.options.PipelineOptionsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/01/23 16:31:39 INFO SparkContext: Running Spark version 1.6.3
17/01/23 16:31:39 WARN NativeCodeLoader: Unable to load native-hadoop library 
for your platform... using builtin-java classes where applicable
17/01/23 16:31:44 INFO SecurityManager: Changing view acls to: stephen
17/01/23 16:31:44 INFO SecurityManager: Changing modify acls to: stephen
17/01/23 16:31:44 INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users with view permissions: Set(stephen); users 
with modify permissions: Set(stephen)
17/01/23 16:31:44 INFO Utils: Successfully started service 'sparkDriver' on 
port 53235.
17/01/23 16:31:45 INFO Slf4jLogger: Slf4jLogger started
17/01/23 16:31:45 INFO Remoting: Starting remoting
17/01/23 16:31:45 INFO Remoting: Remoting started; listening on addresses 
:[akka.tcp://[email protected]:53236]
17/01/23 16:31:45 INFO Utils: Successfully started service 
'sparkDriverActorSystem' on port 53236.
17/01/23 16:31:45 INFO SparkEnv: Registering MapOutputTracker
17/01/23 16:31:45 INFO SparkEnv: Registering BlockManagerMaster
17/01/23 16:31:45 INFO DiskBlockManager: Created local directory at 
/private/var/folders/7k/mv0jws6j7ws8pnp350h1ndtw0000gn/T/blockmgr-358c683e-ae91-49b1-875e-0b037dea5d08
17/01/23 16:31:45 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
17/01/23 16:31:45 INFO SparkEnv: Registering OutputCommitCoordinator
17/01/23 16:31:45 INFO Utils: Successfully started service 'SparkUI' on port 
4040.
17/01/23 16:31:45 INFO SparkUI: Started SparkUI at http://10.71.1.72:4040
17/01/23 16:31:45 INFO HttpFileServer: HTTP File server directory is 
/private/var/folders/7k/mv0jws6j7ws8pnp350h1ndtw0000gn/T/spark-70afbd21-f270-4670-88d6-2895e5f23ea5/httpd-04f079ac-4416-4e74-9dd3-5bf70f70ee8e
17/01/23 16:31:45 INFO HttpServer: Starting HTTP Server
17/01/23 16:31:45 INFO Utils: Successfully started service 'HTTP file server' 
on port 53237.
17/01/23 16:31:46 INFO SparkContext: Added JAR 
file:/Users/stephen/word-count-beam/target/word-count-beam-bundled-0.1.jar at 
http://10.71.1.72:53237/jars/word-count-beam-bundled-0.1.jar with timestamp 
1485207106193
17/01/23 16:31:46 INFO AppClient$ClientEndpoint: Connecting to master 
spark://Chaoran-MacBook-Pro.local:7077...
17/01/23 16:31:46 INFO SparkDeploySchedulerBackend: Connected to Spark cluster 
with app ID app-20170123163146-0000
17/01/23 16:31:46 INFO Utils: Successfully started service 
'org.apache.spark.network.netty.NettyBlockTransferService' on port 53239.
17/01/23 16:31:46 INFO NettyBlockTransferService: Server created on 53239
17/01/23 16:31:46 INFO BlockManagerMaster: Trying to register BlockManager
17/01/23 16:31:46 INFO BlockManagerMasterEndpoint: Registering block manager 
10.71.1.72:53239 with 511.1 MB RAM, BlockManagerId(driver, 10.71.1.72, 53239)
17/01/23 16:31:46 INFO AppClient$ClientEndpoint: Executor added: 
app-20170123163146-0000/0 on worker-20170123162954-10.71.1.72-53220 
(10.71.1.72:53220) with 8 cores
17/01/23 16:31:46 INFO SparkDeploySchedulerBackend: Granted executor ID 
app-20170123163146-0000/0 on hostPort 10.71.1.72:53220 with 8 cores, 1024.0 MB 
RAM
17/01/23 16:31:46 INFO BlockManagerMaster: Registered BlockManager
17/01/23 16:31:46 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready 
for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
17/01/23 16:31:46 INFO AppClient$ClientEndpoint: Executor updated: 
app-20170123163146-0000/0 is now RUNNING
17/01/23 16:31:46 INFO SparkRunner$Evaluator: Evaluating Read(CompressedSource)
17/01/23 16:31:46 INFO SparkRunner$Evaluator: Evaluating ParDo(ExtractWords)
17/01/23 16:31:47 INFO SparkRunner$Evaluator: Evaluating AnonymousParDo
17/01/23 16:31:47 INFO SparkRunner$Evaluator: Entering directly-translatable 
composite transform: 'WordCount.CountWords/Count.PerElement/Count.PerKey'
17/01/23 16:31:47 INFO SparkRunner$Evaluator: Evaluating Count.PerKey 
[Combine.PerKey]
17/01/23 16:31:47 INFO FileBasedSource: Matched 1 files for pattern 
gs://apache-beam-samples/shakespeare/kinglear.txt
17/01/23 16:31:47 INFO SparkRunner$Evaluator: Evaluating AnonymousParDo
17/01/23 16:31:47 INFO SparkRunner$Evaluator: Entering directly-translatable 
composite transform: 'WriteCounts/Write/Create.Values'
17/01/23 16:31:47 INFO SparkRunner$Evaluator: Evaluating Create.Values
17/01/23 16:31:47 INFO SparkRunner$Evaluator: Evaluating AnonymousParDo
17/01/23 16:31:47 INFO SparkRunner$Evaluator: Entering directly-translatable 
composite transform: 'WriteCounts/Write/View.AsSingleton'
17/01/23 16:31:47 INFO SparkRunner$Evaluator: Evaluating View.AsSingleton
17/01/23 16:31:47 INFO SparkContext: Starting job: collect at 
BoundedDataset.java:88
17/01/23 16:31:47 INFO DAGScheduler: Got job 0 (collect at 
BoundedDataset.java:88) with 2 output partitions
17/01/23 16:31:47 INFO DAGScheduler: Final stage: ResultStage 0 (collect at 
BoundedDataset.java:88)
17/01/23 16:31:47 INFO DAGScheduler: Parents of final stage: List()
17/01/23 16:31:47 INFO DAGScheduler: Missing parents: List()
17/01/23 16:31:47 INFO DAGScheduler: Submitting ResultStage 0 
(MapPartitionsRDD[14] at map at BoundedDataset.java:87), which has no missing 
parents
17/01/23 16:31:47 INFO MemoryStore: Block broadcast_0 stored as values in 
memory (estimated size 7.8 KB, free 511.1 MB)
17/01/23 16:31:54 INFO SparkDeploySchedulerBackend: Registered executor 
NettyRpcEndpointRef(null) (10.71.1.72:53246) with ID 0
17/01/23 16:31:54 INFO BlockManagerMasterEndpoint: Registering block manager 
10.71.1.72:53248 with 511.1 MB RAM, BlockManagerId(0, 10.71.1.72, 53248)
17/01/23 16:31:59 INFO Reflections: Reflections took 11475 ms to scan 17 urls, 
producing 22296 keys and 190279 values
17/01/23 16:31:59 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in 
memory (estimated size 4.2 KB, free 511.1 MB)
17/01/23 16:31:59 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 
10.71.1.72:53239 (size: 4.2 KB, free: 511.1 MB)
17/01/23 16:31:59 INFO SparkContext: Created broadcast 0 from broadcast at 
DAGScheduler.scala:1006
17/01/23 16:31:59 INFO DAGScheduler: Submitting 2 missing tasks from 
ResultStage 0 (MapPartitionsRDD[14] at map at BoundedDataset.java:87)
17/01/23 16:31:59 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
17/01/23 16:31:59 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 
10.71.1.72, partition 0,PROCESS_LOCAL, 2082 bytes)
17/01/23 16:31:59 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 
10.71.1.72, partition 1,PROCESS_LOCAL, 3711 bytes)
17/01/23 16:32:11 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 
10.71.1.72:53248 (size: 4.2 KB, free: 511.1 MB)
17/01/23 16:32:11 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) 
in 12456 ms on 10.71.1.72 (1/2)
17/01/23 16:32:11 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) 
in 12438 ms on 10.71.1.72 (2/2)
17/01/23 16:32:11 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have 
all completed, from pool
17/01/23 16:32:11 INFO DAGScheduler: ResultStage 0 (collect at 
BoundedDataset.java:88) finished in 12.462 s
17/01/23 16:32:11 INFO DAGScheduler: Job 0 finished: collect at 
BoundedDataset.java:88, took 24.296735 s
17/01/23 16:32:12 INFO SparkRunner$Evaluator: Evaluating Window.Into()
17/01/23 16:32:12 INFO SparkRunner$Evaluator: Evaluating ParDo(WriteBundles)
17/01/23 16:32:12 INFO MemoryStore: Block broadcast_1 stored as values in 
memory (estimated size 1680.0 B, free 511.1 MB)
17/01/23 16:32:12 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in 
memory (estimated size 1034.0 B, free 511.1 MB)
17/01/23 16:32:12 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 
10.71.1.72:53239 (size: 1034.0 B, free: 511.1 MB)
17/01/23 16:32:12 INFO SparkContext: Created broadcast 1 from broadcast at 
BroadcastHelper.java:111
17/01/23 16:32:12 INFO SparkRunner$Evaluator: Entering directly-translatable 
composite transform: 'WriteCounts/Write/View.AsIterable'
17/01/23 16:32:12 INFO SparkRunner$Evaluator: Evaluating View.AsIterable
17/01/23 16:32:12 INFO SparkContext: Starting job: collect at 
BoundedDataset.java:88
17/01/23 16:32:12 INFO DAGScheduler: Registering RDD 4 (mapToPair at 
GroupCombineFunctions.java:237)
17/01/23 16:32:12 INFO DAGScheduler: Got job 1 (collect at 
BoundedDataset.java:88) with 2 output partitions
17/01/23 16:32:12 INFO DAGScheduler: Final stage: ResultStage 2 (collect at 
BoundedDataset.java:88)
17/01/23 16:32:12 INFO DAGScheduler: Parents of final stage: 
List(ShuffleMapStage 1)
17/01/23 16:32:12 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1)
17/01/23 16:32:12 INFO DAGScheduler: Submitting ShuffleMapStage 1 
(MapPartitionsRDD[4] at mapToPair at GroupCombineFunctions.java:237), which has 
no missing parents
17/01/23 16:32:12 INFO MemoryStore: Block broadcast_2 stored as values in 
memory (estimated size 12.7 KB, free 511.1 MB)
17/01/23 16:32:12 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in 
memory (estimated size 6.1 KB, free 511.1 MB)
17/01/23 16:32:12 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 
10.71.1.72:53239 (size: 6.1 KB, free: 511.1 MB)
17/01/23 16:32:12 INFO SparkContext: Created broadcast 2 from broadcast at 
DAGScheduler.scala:1006
17/01/23 16:32:12 INFO DAGScheduler: Submitting 2 missing tasks from 
ShuffleMapStage 1 (MapPartitionsRDD[4] at mapToPair at 
GroupCombineFunctions.java:237)
17/01/23 16:32:12 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
17/01/23 16:32:12 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 2, 
10.71.1.72, partition 0,PROCESS_LOCAL, 3335 bytes)
17/01/23 16:32:12 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 3, 
10.71.1.72, partition 1,PROCESS_LOCAL, 3335 bytes)
17/01/23 16:32:12 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 
10.71.1.72:53248 (size: 6.1 KB, free: 511.1 MB)
17/01/23 16:32:12 WARN TaskSetManager: Lost task 1.0 in stage 1.0 (TID 3, 
10.71.1.72): java.lang.RuntimeException: Failed to read from reader.
        at 
org.apache.beam.runners.spark.io.SourceRDD$Bounded$1.hasNext(SourceRDD.java:136)
        at 
scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
        at 
org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at 
org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:284)
        at 
org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:171)
        at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:78)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
        at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
        at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Unable to find handler for 
gs://apache-beam-samples/shakespeare/kinglear.txt
        at 
org.apache.beam.sdk.util.IOChannelUtils.getFactory(IOChannelUtils.java:307)
        at 
org.apache.beam.sdk.io.FileBasedSource$FileBasedReader.startImpl(FileBasedSource.java:528)
        at 
org.apache.beam.sdk.io.OffsetBasedSource$OffsetBasedReader.start(OffsetBasedSource.java:271)
        at 
org.apache.beam.runners.spark.io.SourceRDD$Bounded$1.hasNext(SourceRDD.java:125)
        ... 25 more

17/01/23 16:32:12 INFO TaskSetManager: Lost task 0.0 in stage 1.0 (TID 2) on 
executor 10.71.1.72: java.lang.RuntimeException (Failed to read from reader.) 
[duplicate 1]
17/01/23 16:32:12 INFO TaskSetManager: Starting task 0.1 in stage 1.0 (TID 4, 
10.71.1.72, partition 0,PROCESS_LOCAL, 3335 bytes)
17/01/23 16:32:12 INFO TaskSetManager: Starting task 1.1 in stage 1.0 (TID 5, 
10.71.1.72, partition 1,PROCESS_LOCAL, 3335 bytes)
17/01/23 16:32:12 INFO TaskSetManager: Lost task 0.1 in stage 1.0 (TID 4) on 
executor 10.71.1.72: java.lang.RuntimeException (Failed to read from reader.) 
[duplicate 2]
17/01/23 16:32:12 INFO TaskSetManager: Starting task 0.2 in stage 1.0 (TID 6, 
10.71.1.72, partition 0,PROCESS_LOCAL, 3335 bytes)
17/01/23 16:32:12 INFO TaskSetManager: Lost task 1.1 in stage 1.0 (TID 5) on 
executor 10.71.1.72: java.lang.RuntimeException (Failed to read from reader.) 
[duplicate 3]
17/01/23 16:32:12 INFO TaskSetManager: Starting task 1.2 in stage 1.0 (TID 7, 
10.71.1.72, partition 1,PROCESS_LOCAL, 3335 bytes)
17/01/23 16:32:12 INFO TaskSetManager: Lost task 0.2 in stage 1.0 (TID 6) on 
executor 10.71.1.72: java.lang.RuntimeException (Failed to read from reader.) 
[duplicate 4]
17/01/23 16:32:12 INFO TaskSetManager: Starting task 0.3 in stage 1.0 (TID 8, 
10.71.1.72, partition 0,PROCESS_LOCAL, 3335 bytes)
17/01/23 16:32:12 INFO TaskSetManager: Lost task 1.2 in stage 1.0 (TID 7) on 
executor 10.71.1.72: java.lang.RuntimeException (Failed to read from reader.) 
[duplicate 5]
17/01/23 16:32:12 INFO TaskSetManager: Starting task 1.3 in stage 1.0 (TID 9, 
10.71.1.72, partition 1,PROCESS_LOCAL, 3335 bytes)
17/01/23 16:32:12 INFO TaskSetManager: Lost task 0.3 in stage 1.0 (TID 8) on 
executor 10.71.1.72: java.lang.RuntimeException (Failed to read from reader.) 
[duplicate 6]
17/01/23 16:32:12 ERROR TaskSetManager: Task 0 in stage 1.0 failed 4 times; 
aborting job
17/01/23 16:32:12 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have 
all completed, from pool
17/01/23 16:32:12 INFO TaskSetManager: Lost task 1.3 in stage 1.0 (TID 9) on 
executor 10.71.1.72: java.lang.RuntimeException (Failed to read from reader.) 
[duplicate 7]
17/01/23 16:32:12 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have 
all completed, from pool
17/01/23 16:32:12 INFO TaskSchedulerImpl: Cancelling stage 1
17/01/23 16:32:12 INFO DAGScheduler: ShuffleMapStage 1 (mapToPair at 
GroupCombineFunctions.java:237) failed in 0.192 s
17/01/23 16:32:12 INFO DAGScheduler: Job 1 failed: collect at 
BoundedDataset.java:88, took 0.227427 s
17/01/23 16:32:12 INFO SparkUI: Stopped Spark web UI at http://10.71.1.72:4040
17/01/23 16:32:12 INFO SparkDeploySchedulerBackend: Shutting down all executors
17/01/23 16:32:12 INFO SparkDeploySchedulerBackend: Asking each executor to 
shut down
17/01/23 16:32:12 INFO MapOutputTrackerMasterEndpoint: 
MapOutputTrackerMasterEndpoint stopped!
17/01/23 16:32:12 INFO MemoryStore: MemoryStore cleared
17/01/23 16:32:12 INFO BlockManager: BlockManager stopped
17/01/23 16:32:12 INFO BlockManagerMaster: BlockManagerMaster stopped
17/01/23 16:32:12 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: 
OutputCommitCoordinator stopped!
17/01/23 16:32:12 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" 
org.apache.beam.sdk.Pipeline$PipelineExecutionException: 
java.lang.RuntimeException: Failed to read from reader.
        at 
org.apache.beam.runners.spark.SparkPipelineResult.beamExceptionFrom(SparkPipelineResult.java:72)
        at 
org.apache.beam.runners.spark.SparkPipelineResult.waitUntilFinish(SparkPipelineResult.java:112)
        at 
org.apache.beam.runners.spark.SparkPipelineResult.waitUntilFinish(SparkPipelineResult.java:101)
        at org.apache.beam.examples.WordCount.main(WordCount.java:184)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at 
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.RuntimeException: Failed to read from reader.
        at 
org.apache.beam.runners.spark.io.SourceRDD$Bounded$1.hasNext(SourceRDD.java:136)
        at 
scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
        at 
org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at 
org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:284)
        at 
org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:171)
        at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:78)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
        at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
        at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Unable to find handler for 
gs://apache-beam-samples/shakespeare/kinglear.txt
        at 
org.apache.beam.sdk.util.IOChannelUtils.getFactory(IOChannelUtils.java:307)
        at 
org.apache.beam.sdk.io.FileBasedSource$FileBasedReader.startImpl(FileBasedSource.java:528)
        at 
org.apache.beam.sdk.io.OffsetBasedSource$OffsetBasedReader.start(OffsetBasedSource.java:271)
        at 
org.apache.beam.runners.spark.io.SourceRDD$Bounded$1.hasNext(SourceRDD.java:125)
        ... 25 more
17/01/23 16:32:12 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down 
remote daemon.
17/01/23 16:32:12 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon 
shut down; proceeding with flushing remote transports.
17/01/23 16:32:12 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut 
down.
^C17/01/23 16:32:25 INFO ShutdownHookManager: Shutdown hook called
17/01/23 16:32:25 INFO ShutdownHookManager: Deleting directory 
/private/var/folders/7k/mv0jws6j7ws8pnp350h1ndtw0000gn/T/spark-70afbd21-f270-4670-88d6-2895e5f23ea5/httpd-04f079ac-4416-4e74-9dd3-5bf70f70ee8e
17/01/23 16:32:25 INFO ShutdownHookManager: Deleting directory 
/private/var/folders/7k/mv0jws6j7ws8pnp350h1ndtw0000gn/T/spark-70afbd21-f270-4670-88d6-2895e5f23ea5


On Jan 23, 2017, at 4:23 PM, Amit Sela <[email protected]> wrote:

Maybe. Are you running on Dataproc? Are you using YARN/Mesos? Do the machines hosting the executor processes have access to GS? Could you paste the entire stack trace?

On Mon, Jan 23, 2017 at 11:21 PM Chaoran Yu <[email protected]> wrote:
Thank you Amit for the reply,

I just tried two more runners and below is a summary:

DirectRunner: works
FlinkRunner: works in local mode. In cluster mode, I got the error “Communication with JobManager failed: lost connection to the JobManager”.
SparkRunner: works in local mode (mvn exec command) but fails in cluster mode (spark-submit) with the error I pasted in the previous email.

In SparkRunner’s case, could it be that the Spark executors can’t access the gs:// file in Google Storage?

Thank you,



On Jan 23, 2017, at 3:28 PM, Amit Sela <[email protected]> wrote:

Is this working for you with other runners? Judging by the stack trace, it seems like IOChannelUtils fails to find a handler, so it doesn't seem to be a Spark-specific problem.
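
The failure mode above can be illustrated with a minimal sketch. This is not Beam's actual implementation — names and structure are simplified — but it shows the shape of the scheme-to-factory lookup that IOChannelUtils.getFactory performs: if no factory was registered for the "gs" scheme in the JVM doing the read, the lookup throws exactly this kind of IOException.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a scheme-to-factory registry, illustrating why an
// unregistered "gs" scheme surfaces as "Unable to find handler for gs://...".
// This is NOT Beam's code; it only mirrors the lookup's observable behavior.
public class SchemeRegistry {
    private final Map<String, String> factories = new HashMap<>();

    public void register(String scheme, String factory) {
        factories.put(scheme, factory);
    }

    public String getFactory(String spec) throws IOException {
        // Paths without an explicit scheme fall back to the local filesystem.
        int idx = spec.indexOf("://");
        String scheme = (idx < 0) ? "file" : spec.substring(0, idx);
        String factory = factories.get(scheme);
        if (factory == null) {
            throw new IOException("Unable to find handler for " + spec);
        }
        return factory;
    }
}
```

The point of the sketch: registration for "gs" may happen on the driver when pipeline options are processed, but if it never runs inside the Spark executor JVMs, reads of gs:// paths fail there even though the same pipeline works in local mode.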

On Mon, Jan 23, 2017 at 8:50 PM Chaoran Yu <[email protected]> wrote:
Thank you Amit and JB! 

This is not related to DC/OS itself, but I ran into a problem when launching a Spark job on a cluster with spark-submit. My Spark job written in Beam can’t read the specified gs:// file. I got the following error:

Caused by: java.io.IOException: Unable to find handler for gs://beam-samples/sample.txt
at org.apache.beam.sdk.util.IOChannelUtils.getFactory(IOChannelUtils.java:307)
at org.apache.beam.sdk.io.FileBasedSource$FileBasedReader.startImpl(FileBasedSource.java:528)
at org.apache.beam.sdk.io.OffsetBasedSource$OffsetBasedReader.start(OffsetBasedSource.java:271)
at org.apache.beam.runners.spark.io.SourceRDD$Bounded$1.hasNext(SourceRDD.java:125)

Then I thought about switching to another source, but I saw in Beam’s documentation that TextIO can only read from files in Google Cloud Storage (prefixed with gs://) when running in cluster mode. How do you do file IO in Beam when using the SparkRunner?


Thank you,
Chaoran


On Jan 22, 2017, at 4:32 AM, Amit Sela <[email protected]> wrote:

I'll join JB's comment on the Spark runner: submitting Beam pipelines with the Spark runner can be done using Spark's spark-submit script. You can find out more in the Spark runner documentation.

Amit.

On Sun, Jan 22, 2017 at 8:03 AM Jean-Baptiste Onofré <[email protected]> wrote:
Hi,

Not directly DC/OS (I think Stephen did some tests on it), but I have a
platform running Spark and Flink with Beam on Mesos + Marathon.

It basically doesn't require anything special, as running pipelines uses
spark-submit (as in "native" Spark).
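
For example, a submission against a Mesos master looks roughly like this (the master address, ZooKeeper hosts, and class name below are placeholders, not taken from my setup):

```shell
# Submit a Beam pipeline to Spark running on Mesos.
# mesos://zk://... lets spark-submit discover the leading Mesos master
# through ZooKeeper; hosts and ports here are placeholders.
./bin/spark-submit \
  --class org.apache.beam.examples.WordCount \
  --master mesos://zk://zk1:2181,zk2:2181/mesos \
  target/word-count-beam-bundled-0.1.jar \
  --runner=SparkRunner
```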

Regards
JB

On 01/22/2017 12:56 AM, Chaoran Yu wrote:
> Hello all,
>
> Has anyone had experience using Beam on DC/OS? I want to run Beam code
> executed with the Spark runner on DC/OS. As a next step, I would like to
> run the Flink runner as well. There doesn't seem to exist any information
> about running Beam on DC/OS that I can find on the web. So some pointers
> are greatly appreciated.
>
> Thank you,
>
> Chaoran Yu

--
Jean-Baptiste Onofré
[email protected]
http://blog.nanthrax.net
Talend - http://www.talend.com


