[ https://issues.apache.org/jira/browse/FLINK-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16377267#comment-16377267 ]
ASF GitHub Bot commented on FLINK-8787:
---------------------------------------
GitHub user GJL opened a pull request:
https://github.com/apache/flink/pull/5584
[FLINK-8787][flip6] WIP
WIP
## What is the purpose of the change
*(For example: This pull request makes task deployment go through the blob
server, rather than through RPC. That way we avoid re-transferring them on each
deployment (during recovery).)*
cc: @tillrohrmann
## Brief change log
*(for example:)*
- *The TaskInfo is stored in the blob store on job creation time as a
persistent artifact*
- *Deployments RPC transmits only the blob storage reference*
- *TaskManagers retrieve the TaskInfo from the blob cache*
## Verifying this change
*(Please pick either of the following options)*
This change is a trivial rework / code cleanup without any test coverage.
*(or)*
This change is already covered by existing tests, such as *(please describe
tests)*.
*(or)*
This change added tests and can be verified as follows:
*(example:)*
- *Added integration tests for end-to-end deployment with large payloads
(100MB)*
- *Extended integration test for recovery after master (JobManager)
failure*
- *Added test that validates that TaskInfo is transferred only once
across recoveries*
- *Manually verified the change by running a 4 node cluster with 2
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one
JobManager and two TaskManagers during the execution, verifying that recovery
happens correctly.*
## Does this pull request potentially affect one of the following parts:
- Dependencies (does it add or upgrade a dependency): (yes / no)
- The public API, i.e., is any changed class annotated with
`@Public(Evolving)`: (yes / no)
- The serializers: (yes / no / don't know)
- The runtime per-record code paths (performance sensitive): (yes / no /
don't know)
- Anything that affects deployment or recovery: JobManager (and its
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / no / don't know)
- The S3 file system connector: (yes / no / don't know)
## Documentation
- Does this pull request introduce a new feature? (yes / no)
- If yes, how is the feature documented? (not applicable / docs /
JavaDocs / not documented)
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/GJL/flink FLINK-8787
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/flink/pull/5584.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #5584
----
commit 0cde09add17106f09e1f44b2a73400ea14a9eb21
Author: gyao <gary@...>
Date: 2018-02-26T17:52:18Z
[hotfix] Add requireNonNull validation to Configuration copy constructor
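The hotfix above applies a standard fail-fast pattern: validating the argument of a copy constructor with `Objects.requireNonNull` so that a `null` source is rejected immediately instead of surfacing as a `NullPointerException` on first use. A minimal sketch of that pattern (this is not Flink's actual `Configuration` class, just an illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Illustrative stand-in for a configuration holder; not Flink's real class.
public class Configuration {
    private final Map<String, String> confData;

    public Configuration() {
        this.confData = new HashMap<>();
    }

    // Copy constructor: fail fast with a descriptive message rather than
    // deferring the NullPointerException to the first access of confData.
    public Configuration(Configuration other) {
        Objects.requireNonNull(other, "Configuration to copy must not be null.");
        this.confData = new HashMap<>(other.confData);
    }
}
```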
commit 8f826c673bafd8228bb25da20e7dd40384fae971
Author: gyao <gary@...>
Date: 2018-02-26T17:54:34Z
[FLINK-8787][flip6] Ensure that zk namespace configuration reaches
RestClusterClient
----
> Deploying FLIP-6 YARN session with HA fails
> -------------------------------------------
>
> Key: FLINK-8787
> URL: https://issues.apache.org/jira/browse/FLINK-8787
> Project: Flink
> Issue Type: Bug
> Components: Client, YARN
> Affects Versions: 1.5.0
> Environment: emr-5.12.0
> Hadoop distribution: Amazon 2.8.3
> Applications: ZooKeeper 3.4.10
> Reporter: Gary Yao
> Assignee: Gary Yao
> Priority: Blocker
> Labels: flip-6
> Fix For: 1.5.0
>
>
> Starting a YARN session with HA in FLIP-6 mode fails with an exception.
> Commit: 5e3fa4403f518dd6d3fe9970fe8ca55871add7c9
> Command to start YARN session:
> {noformat}
> export HADOOP_CLASSPATH=`hadoop classpath`
> HADOOP_CONF_DIR=/etc/hadoop/conf bin/yarn-session.sh -d -n 1 -s 1 -jm 2048 -tm 2048
> {noformat}
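> For context on the HA setup being exercised here, ZooKeeper high availability is enabled through entries in {{flink-conf.yaml}}. The values below are illustrative placeholders, not taken from the reporter's environment:
> {noformat}
> high-availability: zookeeper
> high-availability.zookeeper.quorum: zk-host:2181
> high-availability.storageDir: hdfs:///flink/ha/
> high-availability.cluster-id: /my-flink-session
> {noformat}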
> Stacktrace:
> {noformat}
> java.lang.reflect.UndeclaredThrowableException
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1854)
> 	at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
> 	at org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:790)
> Caused by: org.apache.flink.util.FlinkException: Could not write the Yarn connection information.
> 	at org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:612)
> 	at org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$2(FlinkYarnSessionCli.java:790)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
> 	... 2 more
> Caused by: org.apache.flink.runtime.leaderretrieval.LeaderRetrievalException: Could not retrieve the leader address and leader session ID.
> 	at org.apache.flink.runtime.util.LeaderRetrievalUtils.retrieveLeaderConnectionInfo(LeaderRetrievalUtils.java:116)
> 	at org.apache.flink.client.program.rest.RestClusterClient.getClusterConnectionInfo(RestClusterClient.java:405)
> 	at org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:589)
> 	... 6 more
> Caused by: java.util.concurrent.TimeoutException: Futures timed out after [60000 milliseconds]
> 	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:223)
> 	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:227)
> 	at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
> 	at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
> 	at scala.concurrent.Await$.result(package.scala:190)
> 	at scala.concurrent.Await.result(package.scala)
> 	at org.apache.flink.runtime.util.LeaderRetrievalUtils.retrieveLeaderConnectionInfo(LeaderRetrievalUtils.java:114)
> 	... 8 more
> ------------------------------------------------------------
> The program finished with the following exception:
> java.lang.reflect.UndeclaredThrowableException
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1854)
> 	at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
> 	at org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:790)
> Caused by: org.apache.flink.util.FlinkException: Could not write the Yarn connection information.
> 	at org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:612)
> 	at org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$2(FlinkYarnSessionCli.java:790)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
> 	... 2 more
> Caused by: org.apache.flink.runtime.leaderretrieval.LeaderRetrievalException: Could not retrieve the leader address and leader session ID.
> 	at org.apache.flink.runtime.util.LeaderRetrievalUtils.retrieveLeaderConnectionInfo(LeaderRetrievalUtils.java:116)
> 	at org.apache.flink.client.program.rest.RestClusterClient.getClusterConnectionInfo(RestClusterClient.java:405)
> 	at org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:589)
> 	... 6 more
> Caused by: java.util.concurrent.TimeoutException: Futures timed out after [60000 milliseconds]
> 	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:223)
> 	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:227)
> 	at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
> 	at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
> 	at scala.concurrent.Await$.result(package.scala:190)
> 	at scala.concurrent.Await.result(package.scala)
> 	at org.apache.flink.runtime.util.LeaderRetrievalUtils.retrieveLeaderConnectionInfo(LeaderRetrievalUtils.java:114)
> 	... 8 more
> {noformat}
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)