Thanks! I did not see the znode and thus did not paste the ls output... anyway,
I will get you the full JM log ASAP.
On Thu, Jun 28, 2018, 5:35 PM Gary Yao wrote:
> Hi Vishal,
>
> The znode /flink_test/da_15/leader/rest_server_lock should exist as long as
> your Flink 1.5 cluster is running. In 1.4 this znode will not be created.
Hi Vishal,
The znode /flink_test/da_15/leader/rest_server_lock should exist as long as
your Flink 1.5 cluster is running. In 1.4 this znode will not be created. Are
you sure that the znode does not exist? Unfortunately, you only attached the
output of "ls /flink_test/da_15".
Can you share the full JobManager log?
I am not seeing rest_server_lock. Is it transient (an ephemeral znode) that
only exists for the duration of the CLI command?
[zk: localhost:2181(CONNECTED) 2] ls /flink_test/da_15
[jobgraphs, leader, checkpoints, leaderlatch, checkpoint-counter]
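(Side note: "ls /flink_test/da_15" only lists the direct children, and the znode Gary mentioned sits one level deeper, under leader. A minimal check from the ZooKeeper CLI, assuming the standard zkCli.sh and the paths used in this thread:

ls /flink_test/da_15/leader
stat /flink_test/da_15/leader/rest_server_lock

In the stat output, ephemeralOwner = 0x0 means a persistent znode; any non-zero value means the znode is ephemeral and disappears when the owning session ends.)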
The logs say
2018-06-28 14:02:56 INFO
Chesnay,
Do you have a rough idea of the 1.5.1 timeline?
Thanks,
--
Christophe
On Mon, Jun 25, 2018 at 4:22 PM, Chesnay Schepler
wrote:
> The watermark issue is known and will be fixed in 1.5.1
>
>
> On 25.06.2018 15:03, Vishal Santoshi wrote:
>
> Thank you
>
> One addition
>
> I do not see WM info on the UI (attached).
Ok, I will check.
On Tue, Jun 26, 2018, 12:39 PM Gary Yao wrote:
> Hi Vishal,
>
> You should check the contents of znode /flink_test/[...]/rest_server_lock
> to see if the URL is correct.
>
> The host and port should be logged by the RestClient [1]. If you do not
> see the message "Sending request of class [...]" on DEBUG level, probably
> the client is not able to retrieve the leader address from ZooKeeper.
Hi Vishal,
You should check the contents of znode /flink_test/[...]/rest_server_lock
to see
if the URL is correct.
The host and port should be logged by the RestClient [1]. If you do not see
the message "Sending request of class [...]" on DEBUG level, probably the
client is not able to retrieve the leader address from ZooKeeper.
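(A minimal sketch of both checks, assuming the cluster-id path used earlier in this thread and that the CLI picks up conf/log4j-cli.properties; adjust paths and names to your setup.

In zkCli.sh, dump the znode contents:

get /flink_test/da_15/leader/rest_server_lock

In conf/log4j-cli.properties on the machine running bin/flink, raise the REST client to DEBUG so the "Sending request of class ..." line shows up:

log4j.logger.org.apache.flink.runtime.rest=DEBUG)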
The leader znode is the right one (it is binary):
get /flink_test/da_15/leader//job_manager_lock
wFDakka.tcp://
fl...@flink-9edd15d7.bf2.tumblr.net:22161/user/jobmanagersrjava.util.UUIDm/J
leastSigBitsJ
OK, a few things:
2018-06-26 13:31:29 INFO CliFrontend:282 - Starting Command Line Client
(Version: 1.5.0, Rev:c61b108, Date:24.05.2018 @ 14:54:44 UTC)
...
2018-06-26 13:31:31 INFO ClientCnxn:876 - Socket connection established to
zk-f1fb95b9.bf2.tumblr.net/10.246.218.17:2181, initiating session
By the way, this is in an HA setup.
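(For context on the HA lookup: the client finds the leading JM through the high-availability settings in flink-conf.yaml. A rough sketch of the relevant keys, with values only guessed from the znode path and ZooKeeper host appearing in this thread:

high-availability: zookeeper
high-availability.zookeeper.quorum: zk-f1fb95b9.bf2.tumblr.net:2181
high-availability.zookeeper.path.root: /flink_test
high-availability.cluster-id: da_15

The client and the JM have to agree on these, otherwise the client looks up the leader under the wrong path.)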
> On Jun 26, 2018, at 5:39 PM, zhangminglei <18717838...@163.com> wrote:
>
> Hi, Gary Yao
>
> At one point I discovered that the ip address [ jobmanager.rpc.address ] had
> changed from 10.208.73.129 to localhost. I think that will cause the issue.
> What do you think?
>
>
Hi, Gary Yao
At one point I discovered that the ip address [ jobmanager.rpc.address ] had
changed from 10.208.73.129 to localhost. I think that will cause the issue.
What do you think?
Cheers
Minglei
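(If jobmanager.rpc.address is silently falling back to localhost, one thing worth trying is pinning it explicitly in flink-conf.yaml on the JobManager machine; a sketch, reusing the address from the message above only as a placeholder:

jobmanager.rpc.address: 10.208.73.129

Whether that is actually the cause here is hard to tell without the full JM log.)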
> On Jun 26, 2018, at 4:53 PM, Gary Yao wrote:
>
> Hi Vishal,
>
> Could it be that you are not using the 1.5.0 client?
Hi Vishal,
Could it be that you are not using the 1.5.0 client? The stacktrace you posted
does not reference valid lines of code in the release-1.5.0-rc6 tag.
If you have an HA setup, the host and port of the leading JM will be looked up
from ZooKeeper before job submission. Therefore, the
I think all I need to add is
web.port: 8081
rest.port: 8081
to the JM flink conf ?
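(A sketch of the JobManager-side flink-conf.yaml, assuming defaults otherwise; as far as I understand, rest.port is the port the new 1.5 REST endpoint binds to, while web.port only matters in legacy mode, so setting both should be harmless:

rest.port: 8081
web.port: 8081

The machine running bin/flink then needs to be able to reach that port on the leading JM.)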
On Mon, Jun 25, 2018 at 10:46 AM, Vishal Santoshi wrote:
> Another issue I saw with flink cli...
>
> org.apache.flink.client.program.ProgramInvocationException: The program
> execution failed: JobManager did
Another issue I saw with flink cli...
org.apache.flink.client.program.ProgramInvocationException: The program
execution failed: JobManager did not respond within 12 ms
    at org.apache.flink.client.program.ClusterClient.runDetached(ClusterClient.java:524)
    at
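(If the submission is genuinely just slow rather than misrouted, the knob behind that message is, as far as I know, akka.client.timeout in the client's flink-conf.yaml; a sketch, with the value only as an example:

akka.client.timeout: 600 s

But note Gary's point elsewhere in the thread that the stacktrace does not reference valid 1.5.0 lines, so a client/cluster version mismatch is worth ruling out before tuning timeouts.)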
The watermark issue is known and will be fixed in 1.5.1
On 25.06.2018 15:03, Vishal Santoshi wrote:
Thank you
One addition
I do not see WM info on the UI (attached).
Is this a known issue? The same pipe on our production has the WM (in fact,
we never had an issue with watermarks not
Hi Vishal,
1. I don't think a rolling update is possible. Flink 1.5.0 changed the
process orchestration and how they communicate. IMO, the way to go is to
start a Flink 1.5.0 cluster, take a savepoint on the running job, start
from the savepoint on the new cluster, and shut the old job down (see the CLI sketch below).
2.
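(A sketch of that savepoint-based move with the CLI; the job id, savepoint directory, and jar path are placeholders:

# on the old cluster: trigger a savepoint for the running job
bin/flink savepoint <jobId> hdfs:///flink/savepoints
# (or cancel and take a savepoint in one step)
bin/flink cancel -s hdfs:///flink/savepoints <jobId>

# on the new 1.5.0 cluster: resume from the savepoint path printed above
bin/flink run -s hdfs:///flink/savepoints/savepoint-xxxx -d <your-job>.jar

The -d flag just submits detached; the important part is that the new cluster can read the savepoint path written by the old one.)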