Re: Flink CLI properties with HA

2018-07-18 Thread Sampath Bhat
wrote: > Hi Sampath, it seems the Flink CLI for standalone would not access *high-availability.storageDir*. > What's the exception stack trace in your environment? > Thanks, vino. > 2018-07-17 15:08 GMT+08:

Re: Flink CLI properties with HA

2018-07-17 Thread Sampath Bhat
> *high-availability.storageDir* is the storage location (job graphs, checkpoints and so on). Actually, the real data is stored under this path and is used for recovery; ZooKeeper just stores a state handle. > Thanks, vino. > 2018-07-16 15:28 GMT+08:00 Sampath Bhat :

Fwd: Flink CLI properties with HA

2018-07-16 Thread Sampath Bhat
-- Forwarded message -- From: Sampath Bhat Date: Fri, Jul 13, 2018 at 3:18 PM Subject: Flink CLI properties with HA To: user Hello, when HA is enabled in the Flink cluster and I have to submit a job via the Flink CLI, then the flink-conf.yaml of the Flink CLI should contain

Flink CLI properties with HA

2018-07-13 Thread Sampath Bhat
Hello, when HA is enabled in the Flink cluster and I have to submit a job via the Flink CLI, then the flink-conf.yaml of the Flink CLI should contain these properties - high-availability: zookeeper, high-availability.cluster-id: flink, high-availability.zookeeper.path.root: flink
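
For context, a minimal sketch of such a client-side flink-conf.yaml; the ZooKeeper quorum and storage path below are placeholders, not values taken from the original thread:

    high-availability: zookeeper
    high-availability.cluster-id: flink
    high-availability.zookeeper.path.root: flink
    high-availability.zookeeper.quorum: zk-host:2181         # placeholder quorum address
    high-availability.storageDir: file:///flink/ha-storage   # placeholder storage path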

Re: Checkpointing in Flink 1.5.0

2018-07-10 Thread Sampath Bhat
Chesnay - why is the absolute file check required in RocksDBStateBackend.setDbStoragePaths(String... paths)? I think this is causing the issue. It's not related to GlusterFS or the file system. The same problem can be reproduced with the following configuration on a local machine. The flink
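
As a rough sketch, the RocksDB local storage path can also be set via flink-conf.yaml, which mirrors what setDbStoragePaths controls; the path below is a placeholder, not the configuration from the original message:

    state.backend: rocksdb
    state.backend.rocksdb.localdir: /tmp/flink-rocksdb   # placeholder local path; the thread discusses the absolute-path check applied here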

Re: Breakage in Flink CLI in 1.5.0

2018-06-21 Thread Sampath Bhat
will fall back to use the jobmanager.rpc.address. Currently, the REST server endpoint runs in the same JVM as the cluster entrypoint and all JobMasters. Cheers, Till > On Thu, Jun 21, 2018 at 8:46 AM Sampath Bhat wrote: > Hello Till

Re: Breakage in Flink CLI in 1.5.0

2018-06-21 Thread Sampath Bhat
at java.net.InetAddress.getAllByName(InetAddress.java:1126) at java.net.InetAddress.getByName(InetAddress.java:1076) at org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils.getRpcUrl(AkkaRpcServiceUtils.java:171) at

Re: Breakage in Flink CLI in 1.5.0

2018-06-20 Thread Sampath Bhat
2018 at 11:18 AM, Sampath Bhat wrote: > Hi Chesnay, if the REST API (i.e. the web server) is mandatory for submitting jobs, then why is there an option to set rest.port to -1? I think it should be mandatory to set some valid port for rest.port and make sure the Flink job manager do

Re: Breakage in Flink CLI in 1.5.0

2018-06-19 Thread Sampath Bhat
the rpc address is still *required* due to some technical implementations; it may be that you can set this to some arbitrary value, however. As a result the REST API (i.e. the web server) must be running in order to submit jobs. > On 19.06.2018 14:12, Sampath

Breakage in Flink CLI in 1.5.0

2018-06-19 Thread Sampath Bhat
Hello, I'm using Flink 1.5.0 and the Flink CLI to submit a jar to the Flink cluster. In Flink 1.4.2, only the job manager RPC address and RPC port were sufficient for the Flink client to connect to the job manager and submit the job. But in Flink 1.5.0 the Flink client additionally requires the
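
For context, a sketch of the client-side settings this thread contrasts; the host names are placeholders:

    # sufficient for the 1.4.2 client, per the thread
    jobmanager.rpc.address: jobmanager-host   # placeholder host
    jobmanager.rpc.port: 6123
    # additionally relevant for the 1.5.0 client, which submits via the REST endpoint
    rest.address: jobmanager-host             # placeholder host
    rest.port: 8081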

Clarity on Flink 1.5 Rescale mechanism

2018-06-12 Thread Sampath Bhat
Hello, in the Flink 1.5 release notes - https://flink.apache.org/news/2018/05/25/release-1.5.0.html#release-notes - under Various Other Features and Improvements: Applications can be rescaled without manually triggering a savepoint. Under the hood, Flink will still take a savepoint, stop the application, and
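
As a sketch, the rescaling introduced around Flink 1.5 is driven from the CLI roughly like this; the job ID and target parallelism below are placeholders:

    bin/flink modify <jobId> -p 8   # placeholder job ID and new parallelism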

Re: How to submit two Flink jobs to the same Flink cluster?

2018-06-12 Thread Sampath Bhat
Hi Angelica, you can run any number of Flink jobs in a Flink cluster. There is no restriction as such, unless the jobs run into resource-sharing issues (e.g. two jobs accessing the same port). On Tue, Jun 12, 2018 at 5:03 AM, Angelica wrote: > I have a Flink Standalone Cluster based
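
Illustrative only: submitting two jobs to the same cluster from the CLI; the jar names are placeholders:

    bin/flink run -d job-a.jar   # placeholder jar; -d submits in detached mode
    bin/flink run -d job-b.jar   # placeholder jar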

Re: Retaining uploaded job jars on Flink HA restarts on Kubernetes

2018-05-07 Thread Sampath Bhat
Hi Rohil, you need not upload the jar again when the job manager restarts in an HA environment. Only the jar stored in web.upload.dir will be deleted, which is fine. The jars needed for the job manager to restart will be stored in high-availability.storageDir along with job graphs and job-related
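
A minimal sketch of the two settings this reply distinguishes; the paths are placeholders:

    web.upload.dir: /opt/flink/web-uploads           # placeholder; jars uploaded via the web UI land here and may be removed on restart
    high-availability.storageDir: file:///flink/ha   # placeholder; job graphs and jars needed for HA recovery live here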

Re: Assign JIRA issue permission

2018-05-07 Thread Sampath Bhat
Thank you for your reply. On Mon, May 7, 2018 at 9:02 AM, Tzu-Li (Gordon) Tai wrote: > Hi Sampath, do you already have a target JIRA that you would like to work on? Once you have one, let us know the JIRA issue ID and your JIRA account ID, then we'll assign you

Assign JIRA issue permission

2018-04-27 Thread Sampath Bhat
Hello, I would like to know the procedure for assigning a JIRA issue. How can I assign one to myself? Thanks

Re: jobmanager rpc inside kubernetes

2018-04-27 Thread Sampath Bhat
It would be helpful if you provided the complete CLI logs, because I'm also using the flink run command to submit jobs to a Flink jobmanager running on K8s and it's working fine. For remote execution using the Flink CLI you should provide a flink-conf.yaml file which contains the job manager address, port and
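
For context, a sketch of a minimal client-side flink-conf.yaml and the corresponding CLI call for remote submission; the host name and jar are placeholders:

    # flink-conf.yaml on the CLI side
    jobmanager.rpc.address: flink-jobmanager   # placeholder host
    jobmanager.rpc.port: 6123

    # submit remotely
    bin/flink run my-job.jar   # placeholder jar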

Re: Flink Client job submission through SSL

2018-04-06 Thread Sampath Bhat
Shuyi > On Thu, Apr 5, 2018 at 11:37 PM, Sampath Bhat <sam414255p...@gmail.com> wrote: >> Hello, I would like to know if the job submission through the Flink command line, say ./bin/flink run, can be authenticated. Like if SSL is enabled

Flink Client job submission through SSL

2018-04-06 Thread Sampath Bhat
Hello, I would like to know if job submission through the Flink command line, say ./bin/flink run, can be authenticated. For instance, if SSL is enabled, will the job submission require SSL certificates? But I don't see any such behavior. A simple flink run is able to submit the job even if SSL is
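
For reference, a sketch of the SSL-related options usually involved here; the keystore/truststore paths and passwords are placeholders:

    security.ssl.enabled: true
    security.ssl.keystore: /path/to/keystore.jks         # placeholder path
    security.ssl.keystore-password: changeit             # placeholder
    security.ssl.key-password: changeit                  # placeholder
    security.ssl.truststore: /path/to/truststore.jks     # placeholder path
    security.ssl.truststore-password: changeit           # placeholder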

Re: SSL config on Kubernetes - Dynamic IP

2018-03-28 Thread Sampath Bhat
Hi Edward, you can use this parameter in flink-conf.yaml to suppress hostname checking in certificates, if it suits your purpose: security.ssl.verify-hostname: false. Secondly, I'm also running Flink 1.4 on K8s and I used to get the same error stack trace as you mentioned, while the blob client

Re: Flink web UI authentication

2018-03-19 Thread Sampath Bhat
arameter > "web.access-control-allow-origin", I am not aware of anything like > username/password authentication. Chesnay (cc'd) may know more about > future plans. > You can, however, wrap a proxy like squid around the web UI if you need > this. > > > Regards >

Flink web UI authentication

2018-03-13 Thread Sampath Bhat
Hello, I would like to know if Flink supports any user-level authentication, like username/password, for the Flink web UI. Regards, Sampath S