[jira] [Commented] (FLINK-3725) Exception in thread "main" scala.MatchError: ... (of class scala.Tuple4)

2016-07-22 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389770#comment-15389770
 ] 

Stephan Ewen commented on FLINK-3725:

This looks like a strange classpath issue - probably multiple conflicting 
shaded versions of Guava.

That issue is fixed in 1.1 - there is no dependency on shaded Guava at that 
point any more.
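A minimal sketch for checking this, assuming it is launched with the same 
classpath as the JobManager (the class name is the one from the stack trace; 
the object name here is just for illustration). More than one URL printed 
means conflicting copies of the shaded Guava classes are on the classpath:

{code}
// Sketch: list every classpath location that provides the shaded Guava
// Iterators class from the stack trace.
object ShadedGuavaCheck {
  def main(args: Array[String]): Unit = {
    val resource = "org/apache/flink/shaded/com/google/common/collect/Iterators.class"
    val urls = getClass.getClassLoader.getResources(resource)
    var count = 0
    while (urls.hasMoreElements) {
      count += 1
      println(urls.nextElement())
    }
    println(s"Found $count copies of $resource")
  }
}
{code}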



> Exception in thread "main" scala.MatchError: ... (of class scala.Tuple4)
> 
>
> Key: FLINK-3725
> URL: https://issues.apache.org/jira/browse/FLINK-3725
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.0.1
> Environment: # java -version
> openjdk version "1.8.0_77"
> OpenJDK Runtime Environment (build 1.8.0_77-b03)
> OpenJDK 64-Bit Server VM (build 25.77-b03, mixed mode)
>Reporter: Maxim Dobryakov
>
> When I start a standalone cluster with the `bin/jobmanager.sh start cluster` 
> command everything works fine, but when I use the same command for an HA 
> cluster the JobManager raises an error and stops:
> *log/flink--jobmanager-0-example-app-1.example.local.out*
> {code}
> Exception in thread "main" scala.MatchError: ({blob.server.port=6130, 
> state.backend.fs.checkpointdir=s3://s3.example.com/example_staging_flink/checkpoints,
>  blob.storage.directory=/flink/data/blob_storage, jobmanager.heap.mb=1024, 
> fs.s3.impl=org.apache.hadoop.fs.s3.S3FileSystem, 
> restart-strategy.fixed-delay.attempts=2, recovery.mode=zookeeper, 
> jobmanager.web.port=8081, taskmanager.memory.preallocate=false, 
> jobmanager.rpc.port=0, flink.base.dir.path=/flink/conf/.., 
> recovery.zookeeper.storageDir=s3://s3.example.com/example_staging_flink/recovery,
>  taskmanager.tmp.dirs=/flink/data/task_manager, 
> restart-strategy.fixed-delay.delay=60s, taskmanager.data.port=6121, 
> recovery.zookeeper.path.root=/example_staging/flink, parallelism.default=4, 
> taskmanager.numberOfTaskSlots=4, 
> recovery.zookeeper.quorum=zookeeper-1.example.local:2181,zookeeper-2.example.local:2181,zookeeper-3.example.local:2181,
>  fs.hdfs.hadoopconf=/flink/conf, state.backend=filesystem, 
> restart-strategy=none, recovery.jobmanager.port=6123, 
> taskmanager.heap.mb=2048},CLUSTER,null,org.apache.flink.shaded.com.google.common.collect.Iterators$5@3bf7ca37)
>  (of class scala.Tuple4)
> at org.apache.flink.runtime.jobmanager.JobManager$.main(JobManager.scala:1605)
> at org.apache.flink.runtime.jobmanager.JobManager.main(JobManager.scala)
> {code}
> *log/flink--jobmanager-0-example-app-1.example.local.log*
> {code}
> 2016-04-11 10:58:31,680 DEBUG 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory  - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
> sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Rate of 
> successful kerberos logins and latency (milliseconds)])
> 2016-04-11 10:58:31,696 DEBUG 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory  - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
> sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Rate of failed 
> kerberos logins and latency (milliseconds)])
> 2016-04-11 10:58:31,697 DEBUG 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory  - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
> sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[GetGroups])
> 2016-04-11 10:58:31,699 DEBUG 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl - UgiMetrics, 
> User and group related metrics
> 2016-04-11 10:58:31,951 DEBUG org.apache.hadoop.util.Shell
>   - Failed to detect a valid hadoop home directory
> java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set.
> at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:303)
> at org.apache.hadoop.util.Shell.<clinit>(Shell.java:328)
> at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
> at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
> at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:272)
> at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
> at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:790)
> at 
> 

[jira] [Commented] (FLINK-3725) Exception in thread "main" scala.MatchError: ... (of class scala.Tuple4)

2016-06-08 Thread Ankit Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321682#comment-15321682
 ] 

Ankit Chaudhary commented on FLINK-3725:


I did have this issue with my setup, but after including the Guava jar in the 
Flink classpath the issue was gone. It looks like the JobManager requires Guava 
for this class: org.apache.flink.shaded.com.google.common.collect.Iterators

This might still be a valid bug, since it looks like the plan is to remove the 
Guava dependency from Flink (as mentioned in FLINK-3821 and several other 
related tickets).
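A quick way to sanity-check this workaround, assuming the check runs with the 
same classpath as the JobManager (a sketch using only standard JVM reflection 
APIs; the object name is illustrative): resolve the relocated class named in 
the stack trace and print which jar it was loaded from.

{code}
// Sketch: verify that the relocated Guava class resolves, and print the jar
// it came from. A ClassNotFoundException means the shaded Guava classes are
// missing from the classpath.
object GuavaResolutionCheck {
  def main(args: Array[String]): Unit = {
    val className = "org.apache.flink.shaded.com.google.common.collect.Iterators"
    try {
      val clazz = Class.forName(className)
      // getCodeSource can be null for bootstrap classes, hence the Option wrapper
      val location = Option(clazz.getProtectionDomain.getCodeSource)
        .map(_.getLocation.toString)
        .getOrElse("<unknown location>")
      println(s"$className loaded from $location")
    } catch {
      case e: ClassNotFoundException =>
        println(s"$className is NOT on the classpath: $e")
    }
  }
}
{code}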
