It's very strange.

I ran the same code in Colab and it works fine. Just to ensure it's not using a
local runner, I closed the Flink connector and then it came back as connection
refused. When I ran it again, but with Flink now running, it worked a
doddle.
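For anyone following along, a quick way to reproduce that "connection refused vs. works fine" distinction is to probe the Flink REST port before submitting the pipeline. This is just a sketch; it assumes the same localhost:8081 endpoint used in the pipeline options earlier in this thread.

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError, timeouts, unreachable hosts
        return False

# Probe the Flink REST endpoint assumed by the pipeline options in this thread.
if port_open("localhost", 8081):
    print("Flink REST endpoint is up - safe to submit the pipeline.")
else:
    print("Connection refused/unreachable - start the Flink cluster first.")
```

If this prints "Connection refused" the job server was never going to reach the cluster, which matches the behaviour I saw when I stopped Flink.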



On 2020/10/24 22:56:18, Ankur Goenka <goe...@google.com> wrote: 
> Can you try running
> java -jar
> C:\\Users\\rekharamesh/.apache_beam/cache/jars\\beam-runners-flink-1.8-job-server-2.24.0.jar
> --flink-master http://localhost:8081 --artifacts-dir
> C:\\Users\\REKHAR~1\\AppData\\Local\\Temp\\beam-temp4r_emy7q\\artifactskmzxyxkl
> --job-port 57115 --artifact-port 0 --expansion-port 0
> 
> to see why the job server is failing.
> 
> On Sat, Oct 24, 2020 at 3:52 PM Ramesh Mathikumar <meetr...@googlemail.com>
> wrote:
> 
> > Hi Ankur,
> >
> > Thanks for the prompt response. I suspected a similar issue, so to validate
> > that I ran a local cluster of Flink. My parameters are as follows.
> >
> > options = PipelineOptions([
> >     "--runner=FlinkRunner",
> >     "--flink_version=1.8",
> >     "--flink_master=localhost:8081",
> >     "--environment_type=LOOPBACK"
> > ])
> >
> > And when I run it, it does not even reach the cluster - instead it bombs
> > with the following message.
> >
> >
> > WARNING:root:Make sure that locally built Python SDK docker image has
> > Python 3.6 interpreter.
> > ERROR:apache_beam.utils.subprocess_server:Starting job service with
> > ['java', '-jar',
> > 'C:\\Users\\rekharamesh/.apache_beam/cache/jars\\beam-runners-flink-1.8-job-server-2.24.0.jar',
> > '--flink-master', 'http://localhost:8081',
> > '--artifacts-dir',
> > 'C:\\Users\\REKHAR~1\\AppData\\Local\\Temp\\beam-temp4r_emy7q\\artifactskmzxyxkl',
> > '--job-port', '57115', '--artifact-port', '0', '--expansion-port', '0']
> > ERROR:apache_beam.utils.subprocess_server:Error bringing up service
> > Traceback (most recent call last):
> >   File
> > "C:\Users\rekharamesh\AppData\Local\Programs\Python\Python36-32\lib\site-packages\apache_beam\utils\subprocess_server.py",
> > line 88, in start
> >     'Service failed to start up with error %s' % self._process.poll())
> > RuntimeError: Service failed to start up with error 1
> > Traceback (most recent call last):
> >   File "SaiStudy - Apache-Beam-Spark.py", line 34, in <module>
> >     | 'Write results' >> beam.io.WriteToText(outputs_prefix)
> >   File
> > "C:\Users\rekharamesh\AppData\Local\Programs\Python\Python36-32\lib\site-packages\apache_beam\pipeline.py",
> > line 555, in __exit__
> >     self.result = self.run()
> >   File
> > "C:\Users\rekharamesh\AppData\Local\Programs\Python\Python36-32\lib\site-packages\apache_beam\pipeline.py",
> > line 534, in run
> >     return self.runner.run_pipeline(self, self._options)
> >   File
> > "C:\Users\rekharamesh\AppData\Local\Programs\Python\Python36-32\lib\site-packages\apache_beam\runners\portability\flink_runner.py",
> > line 49, in run_pipeline
> >
> >     return super(FlinkRunner, self).run_pipeline(pipeline, options)
> >   File
> > "C:\Users\rekharamesh\AppData\Local\Programs\Python\Python36-32\lib\site-packages\apache_beam\runners\portability\portable_runner.py",
> > line 388, in run_pipeline
> >     job_service_handle = self.create_job_service(options)
> >   File
> > "C:\Users\rekharamesh\AppData\Local\Programs\Python\Python36-32\lib\site-packages\apache_beam\runners\portability\portable_runner.py",
> > line 304, in create_job_service
> >     return self.create_job_service_handle(server.start(), options)
> >   File
> > "C:\Users\rekharamesh\AppData\Local\Programs\Python\Python36-32\lib\site-packages\apache_beam\runners\portability\job_server.py",
> > line 83, in start
> >     self._endpoint = self._job_server.start()
> >   File
> > "C:\Users\rekharamesh\AppData\Local\Programs\Python\Python36-32\lib\site-packages\apache_beam\runners\portability\job_server.py",
> > line 112, in start
> >     return self._server.start()
> >   File
> > "C:\Users\rekharamesh\AppData\Local\Programs\Python\Python36-32\lib\site-packages\apache_beam\utils\subprocess_server.py",
> > line 88, in start
> >     'Service failed to start up with error %s' % self._process.poll())
> > RuntimeError: Service failed to start up with error 1
> >
> >
> > On 2020/10/24 22:39:20, Ankur Goenka <goe...@google.com> wrote:
> > > Spark running inside docker might require additional configurations.
> > > To simplify things and to make sure Spark on docker is actually the real
> > > issue, I would recommend running Spark natively (not inside docker) on the
> > > host machine and then submitting the pipeline to it.
> > >
> > > On Sat, Oct 24, 2020 at 11:33 AM Ramesh Mathikumar <
> > meetr...@googlemail.com>
> > > wrote:
> > >
> > > > I am running a sample pipeline and my environment is this.
> > > >
> > > > python "SaiStudy - Apache-Beam-Spark.py" --runner=PortableRunner
> > > > --job_endpoint=192.168.99.102:8099
> > > >
> > > > My Spark is running on a Docker Container and I can see that the
> > > > JobService is running at 8099.
> > > >
> > > > I am getting the following error:
> > > > grpc._channel._MultiThreadedRendezvous:
> > > > <_MultiThreadedRendezvous of RPC that terminated with: status =
> > > > StatusCode.UNAVAILABLE details = "failed to connect to all addresses"
> > > > debug_error_string =
> > > > "{"created":"@1603539936.536000000","description":"Failed to pick
> > > > subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":4090,"referenced_errors":[{"created":"@1603539936.536000000","description":"failed
> > > > to connect to all addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":394,"grpc_status":14}]}"
> > > >
> > > > When I curl to ip:port, I can see the following error from the docker
> > > > logs:
> > > > Oct 24, 2020 11:34:50 AM
> > > > org.apache.beam.vendor.grpc.v1p26p0.io.grpc.netty.NettyServerTransport
> > > > notifyTerminated INFO: Transport failed
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.handler.codec.http2.Http2Exception:
> > > > Unexpected HTTP/1.x request: GET / at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.handler.codec.http2.Http2Exception.connectionError(Http2Exception.java:103)
> > > > at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.readClientPrefaceString(Http2ConnectionHandler.java:302)
> > > > at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.decode(Http2ConnectionHandler.java:239)
> > > > at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:438)
> > > > at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:505) at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:444)
> > > > at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:283)
> > > > at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
> > > > at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
> > > > at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
> > > > at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422)
> > > > at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
> > > > at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931)
> > > > at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
> > > > at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700)
> > > > at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:635)
> > > > at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:552)
> > > > at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514)
> > > > at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044) at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
> > > > at
> > > >
> > org.apache.beam.vendor.grpc.v1p26p0.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
> > > > at java.lang.Thread.run(Thread.java:748)
> > > >
> > > > Help Please.
> > > >
> > >
> >
> 
