[ 
https://issues.apache.org/jira/browse/BEAM-6576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Snowberger updated BEAM-6576:
-----------------------------------
    Description: 
Hello, hopefully I'm reporting this in the right location -

 

I have a simple Dataflow job that reads from Pub/Sub, does a few transforms, 
and writes to Redis (in a custom PTransform at the moment, since there is no 
Python I/O connector). It runs perfectly well for several days, then elements 
simply stop being processed. If I update the job, or stop and start it, it 
carries on again for a few more days. I believe I am not the only one to have 
run into this issue, as I found mention of it on [Stack 
Overflow|https://stackoverflow.com/questions/53610876/dataflow-stops-streaming-to-bigquery-without-errors].
 The only indication of workers quitting I've discovered so far is these info 
logs:

 
{code}
[0129/074659:INFO:update_manager-inl.h(52)] ChromeOSPolicy::UpdateCheckAllowed: 
START
[0129/074659:WARNING:evaluation_context-inl.h(43)] Error reading Variable 
update_disabled: "No value set for update_disabled"
[0129/074659:WARNING:evaluation_context-inl.h(43)] Error reading Variable 
release_channel_delegated: "No value set for release_channel_delegated"
[0129/074659:INFO:chromeos_policy.cc(317)] Periodic check interval not 
satisfied, blocking until 1/29/2019 8:26:37 GMT
[0129/074659:INFO:update_manager-inl.h(74)] ChromeOSPolicy::UpdateCheckAllowed: 
END
{code}
I have also filed a Dataflow bug, as I'm not sure where the problem lies. Any 
help resolving this would be very welcome, and if I can provide anything 
further, please let me know. Thank you!

 

 


> Python SDK, DataflowRunner streaming job stops without error.
> -------------------------------------------------------------
>
>                 Key: BEAM-6576
>                 URL: https://issues.apache.org/jira/browse/BEAM-6576
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-dataflow
>    Affects Versions: 2.9.0
>            Reporter: David Snowberger
>            Assignee: Tyler Akidau
>            Priority: Major
>
> Hello, hopefully I'm reporting this in the right location -
>  
> I have a simple Dataflow job that reads from Pub/Sub, does a few transforms, 
> and writes to Redis (in a custom PTransform at the moment, since there is no 
> Python I/O connector). It runs perfectly well for several days, then elements 
> simply stop being processed. If I update the job, or stop and start it, it 
> carries on again for a few more days. I believe I am not the only one to have 
> run into this issue, as I found mention of it on [Stack 
> Overflow|https://stackoverflow.com/questions/53610876/dataflow-stops-streaming-to-bigquery-without-errors].
>  The only indication of workers quitting I've discovered so far is these 
> info logs:
>  
> {code}
> [0129/074659:INFO:update_manager-inl.h(52)] 
> ChromeOSPolicy::UpdateCheckAllowed: START
> [0129/074659:WARNING:evaluation_context-inl.h(43)] Error reading Variable 
> update_disabled: "No value set for update_disabled"
> [0129/074659:WARNING:evaluation_context-inl.h(43)] Error reading Variable 
> release_channel_delegated: "No value set for release_channel_delegated"
> [0129/074659:INFO:chromeos_policy.cc(317)] Periodic check interval not 
> satisfied, blocking until 1/29/2019 8:26:37 GMT
> [0129/074659:INFO:update_manager-inl.h(74)] 
> ChromeOSPolicy::UpdateCheckAllowed: END
> {code}
> I have also filed a Dataflow bug, as I'm not sure where the problem lies. Any 
> help resolving this would be very welcome, and if I can provide anything 
> further, please let me know. Thank you!
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
