[ https://issues.apache.org/jira/browse/BEAM-4292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519908#comment-16519908 ]

Henning Rohde commented on BEAM-4292:
-------------------------------------

I tried to run a simplified version of your example and it fails with:

```
java.lang.RuntimeException: java.lang.ClassCastException: com.google.cloud.dataflow.worker.BatchModeExecutionContext cannot be cast to com.google.cloud.dataflow.worker.StreamingModeExecutionContext
        at com.google.cloud.dataflow.worker.BeamFnMapTaskExecutorFactory$4.typedApply(BeamFnMapTaskExecutorFactory.java:442)
        at com.google.cloud.dataflow.worker.BeamFnMapTaskExecutorFactory$4.typedApply(BeamFnMapTaskExecutorFactory.java:413)
        at com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:63)
        at com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:50)
        at com.google.cloud.dataflow.worker.graph.Networks.replaceDirectedNetworkNodes(Networks.java:87)
        at com.google.cloud.dataflow.worker.BeamFnMapTaskExecutorFactory.create(BeamFnMapTaskExecutorFactory.java:175)
        at com.google.cloud.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:336)
        at com.google.cloud.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:290)
        at com.google.cloud.dataflow.worker.DataflowRunnerHarness.start(DataflowRunnerHarness.java:179)
        at com.google.cloud.dataflow.worker.DataflowRunnerHarness.main(DataflowRunnerHarness.java:107)
Caused by: java.lang.ClassCastException: com.google.cloud.dataflow.worker.BatchModeExecutionContext cannot be cast to com.google.cloud.dataflow.worker.StreamingModeExecutionContext
        at com.google.cloud.dataflow.worker.PubsubSink$Factory.create(PubsubSink.java:111)
        at com.google.cloud.dataflow.worker.PubsubSink$Factory.create(PubsubSink.java:84)
        at com.google.cloud.dataflow.worker.SinkRegistry.create(SinkRegistry.java:103)
        at com.google.cloud.dataflow.worker.SinkRegistry.create(SinkRegistry.java:37)
        at com.google.cloud.dataflow.worker.BeamFnMapTaskExecutorFactory.createWriteOperation(BeamFnMapTaskExecutorFactory.java:483)
        at com.google.cloud.dataflow.worker.BeamFnMapTaskExecutorFactory$4.typedApply(BeamFnMapTaskExecutorFactory.java:429)
        ... 9 more
```

It looks like Dataflow supports Pub/Sub only in streaming mode. The simplest path forward is probably not to write #1 as a pipeline at all, but instead to read the files using the Beam filesystem and publish to Pub/Sub directly via the client API, similarly to how streaming_wordcap works.

> Add streaming wordcount example
> -------------------------------
>
>                 Key: BEAM-4292
>                 URL: https://issues.apache.org/jira/browse/BEAM-4292
>             Project: Beam
>          Issue Type: Improvement
>          Components: sdk-go
>            Reporter: Henning Rohde
>            Assignee: James Wilson
>            Priority: Major
>         Attachments: publish_wordline.go
>
>
> It is referenced on the Beam website as part of the Wordcount progression:
> https://beam.apache.org/get-started/wordcount-example/#streamingwordcount-example



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
