nitinlkoin1984 opened a new issue, #23932:
URL: https://github.com/apache/beam/issues/23932
### What happened?
I have deployed a Spark v3.1.2 cluster on Kubernetes. My Beam job server and
Beam SDK container are running on two separate Linux virtual machines. The
following pipeline keeps running and never completes:
```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

op = PipelineOptions([
    "--runner=PortableRunner",
    "--job_endpoint=localhost:8099",
    "--environment_type=EXTERNAL",
    "--environment_config=vm2-hostname:50000",
    "--artifact_endpoint=localhost:8098",
])
```
```python
with beam.Pipeline(options=op) as p:
    p | beam.Create([1, 2, 3, 10]) | beam.Map(lambda x: x + 1) | beam.Map(print)
```
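Since the options above use `--environment_type=EXTERNAL`, the runner expects an SDK worker pool to already be listening at the configured address on vm2. For context, here is a minimal sketch of starting such a pool with the SDK's worker pool entry point (the `--service_port` flag and the `main` invocation are assumptions about the stock entry point; the actual deployment may start its SDK container differently):

```python
# Hypothetical sketch: start a Beam Python worker pool on vm2 so the runner
# can dial it at vm2-hostname:50000. Assumes apache_beam is installed there.
from apache_beam.runners.worker import worker_pool_main

# Roughly equivalent to:
#   python -m apache_beam.runners.worker.worker_pool_main --service_port=50000
worker_pool_main.main(["--service_port=50000"])
```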
The Docker logs for the SDK container show the following error:
```
Starting worker with command ['/opt/apache/beam/boot', '--id=1-1', '--logging_endpoint=localhost:43541', '--artifact_endpoint=localhost:44461', '--provision_endpoint=localhost:43225', '--control_endpoint=localhost:35475']
E1101 22:11:16.787804592      22 fork_posix.cc:76] Other threads are currently calling into gRPC, skipping fork() handlers
2022/11/01 22:13:16 Failed to obtain provisioning information: failed to dial server at localhost:43225
caused by:
context deadline exceeded
```
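Note that the boot command above hands the worker `localhost` endpoints (e.g. `--provision_endpoint=localhost:43225`); since the job server and the SDK container run on different machines, `localhost` inside the container likely does not point at the machine serving those endpoints. A generic gRPC reachability probe (a sketch assuming only the `grpc` package, not part of the original report) can confirm this from the SDK host:

```python
# Generic connectivity check: is the provision endpoint from the log
# reachable from this host? Run on the machine hosting the SDK container.
import grpc

channel = grpc.insecure_channel("localhost:43225")
try:
    # Raises grpc.FutureTimeoutError if the channel is not ready in time.
    grpc.channel_ready_future(channel).result(timeout=5)
    print("endpoint reachable")
except grpc.FutureTimeoutError:
    print("failed to dial server, matching the error in the log")
```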
### Issue Priority
Priority: 3
### Issue Component
Component: sdk-py-harness