Re: Flink Yarn Deployment Issue - 1.7.0

2018-12-10 Thread sohimankotia
Can anyone please help?





Re: Flink Yarn Deployment Issue - 1.7.0

2018-12-10 Thread sohimankotia
Can anyone help?





Re: Flink Yarn Deployment Issue - 1.7.0

2018-12-09 Thread sohi mankotia
Hi Jörn,

There are no more logs. I am attaching the YARN aggregated logs for the
first problem. For the second one, the job is not even getting submitted.

- Sohi

On Sun, Dec 9, 2018 at 2:13 PM Jörn Franke wrote:

> Can you check the Flink log files? You should find a better description
> of the error there.
>
> > On 08.12.2018 at 18:15, sohimankotia wrote:
> >
> > [original message quoted in full; trimmed here, the full text is in the
> > first post of this thread below]

Re: Flink Yarn Deployment Issue - 1.7.0

2018-12-09 Thread Jörn Franke
Can you check the Flink log files? You should find a better description of
the error there.
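
If the job made it onto the cluster at all, the aggregated YARN logs are
another place where the full stack trace usually ends up. A quick, generic
way to pull them (the application id below is a placeholder; take the real
one from the YARN UI or from "yarn application -list"):

yarn logs -applicationId application_1544000000000_0001 > flink-yarn.log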

> On 08.12.2018 at 18:15, sohimankotia wrote:
>
> [original message quoted in full; trimmed here, the full text is in the
> first post of this thread below]

Flink Yarn Deployment Issue - 1.7.0

2018-12-08 Thread sohimankotia
Hi,

I have installed Flink 1.7.0 with Hadoop 2.7 and Scala 2.11. We are using
the Hortonworks Hadoop distribution (hdp/2.6.1.0-129/).

*Flink lib folder looks like:*


-rw-r--r-- 1 hdfs hadoop 93184216 Nov 29 02:15 flink-dist_2.11-1.7.0.jar
-rw-r--r-- 1 hdfs hadoop    79219 Nov 29 03:33 flink-hadoop-compatibility_2.11-1.7.0.jar
-rw-r--r-- 1 hdfs hadoop   141881 Nov 29 02:13 flink-python_2.11-1.7.0.jar
-rw-r--r-- 1 hdfs hadoop   489884 Nov 28 23:01 log4j-1.2.17.jar
-rw-r--r-- 1 hdfs hadoop     9931 Nov 28 23:01 slf4j-log4j12-1.7.15.jar

*My code:*

   ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

   String p = args[0];

   Job job = Job.getInstance();
   SequenceFileInputFormat<Text, BytesWritable> inputFormat =
       new SequenceFileInputFormat<>();
   job.getConfiguration().setBoolean(FileInputFormat.INPUT_DIR_RECURSIVE, true);
   final HadoopInputFormat<Text, BytesWritable> hInputEvents =
       HadoopInputs.readHadoopFile(inputFormat, Text.class, BytesWritable.class, p, job);
   org.apache.flink.configuration.Configuration fileReadConfig =
       new org.apache.flink.configuration.Configuration();

   env.createInput(hInputEvents)
      .output(new PrintingOutputFormat<>());
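
For reference, here is a self-contained sketch of roughly the same job
(class and job names are made up; the key/value types assumed are
Text/BytesWritable, matching the readHadoopFile call above). Note that a
DataSet sink defined via output(...) needs an explicit env.execute() to
actually run:

import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormat;
import org.apache.flink.api.java.io.PrintingOutputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.hadoopcompatibility.HadoopInputs;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;

public class ReadSequenceFileJob {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        String path = args[0];

        // Read SequenceFiles recursively from the given directory.
        Job job = Job.getInstance();
        job.getConfiguration().setBoolean(FileInputFormat.INPUT_DIR_RECURSIVE, true);

        SequenceFileInputFormat<Text, BytesWritable> inputFormat =
                new SequenceFileInputFormat<>();
        HadoopInputFormat<Text, BytesWritable> input =
                HadoopInputs.readHadoopFile(inputFormat, Text.class,
                        BytesWritable.class, path, job);

        env.createInput(input)
           .output(new PrintingOutputFormat<Tuple2<Text, BytesWritable>>());

        // output(...) is a lazy sink on the DataSet API; execute() triggers it.
        env.execute("read-sequence-file-sample");
    }
}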


*pom.xml*

flink.version = 1.7.0

<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-java</artifactId>
  <version>${flink.version}</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-clients_2.11</artifactId>
  <version>${flink.version}</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-java_2.11</artifactId>
  <version>${flink.version}</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-hadoop-compatibility_2.11</artifactId>
  <version>${flink.version}</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-shaded-hadoop2</artifactId>
  <version>${flink.version}</version>
  <scope>provided</scope>
</dependency>

*In the script:*



export HADOOP_CONF_DIR=/etc/hadoop/conf
export HADOOP_CLASSPATH="/usr/hdp/2.6.1.0-129/hadoop/hadoop-*":`hadoop
classpath`

echo ${HADOOP_CLASSPATH}

PARALLELISM=1
JAR_PATH="jar"
CLASS_NAME="CLASS_NAME"
NODES=1
SLOTS=1
MEMORY_PER_NODE=2048
QUEUE="default"
NAME="sample"

IN="input-file-path"


/home/hdfs/flink-1.7.0/bin/flink run -m yarn-cluster  -yn ${NODES} -yqu
${QUEUE} -ys ${SLOTS} -ytm ${MEMORY_PER_NODE} --parallelism ${PARALLELISM}
-ynm ${NAME} -c ${CLASS_NAME} ${JAR_PATH} ${IN} 
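
For comparison, the minimal setup commonly suggested for the Hadoop-free
Flink downloads is to export only the output of hadoop classpath (a sketch
reusing the same variables as above; the paths are this cluster's). One
thing worth noting: the JVM only expands classpath wildcards of the exact
form dir/*, so an entry like /usr/hdp/2.6.1.0-129/hadoop/hadoop-* is taken
literally and matches nothing:

export HADOOP_CONF_DIR=/etc/hadoop/conf
export HADOOP_CLASSPATH=`hadoop classpath`

/home/hdfs/flink-1.7.0/bin/flink run -m yarn-cluster -yn ${NODES} \
  -yqu ${QUEUE} -ys ${SLOTS} -ytm ${MEMORY_PER_NODE} \
  --parallelism ${PARALLELISM} -ynm ${NAME} -c ${CLASS_NAME} ${JAR_PATH} ${IN}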


*where the echoed classpath prints as:*

/usr/hdp/2.6.1.0-129/hadoop/hadoop-*:/usr/hdp/2.6.1.0-129/hadoop/conf:/usr/hdp/2.6.1.0-129/hadoop/lib/*:/usr/hdp/2.6.1.0-129/hadoop/.//*:/usr/hdp/2.6.1.0-129/hadoop-hdfs/./:/usr/hdp/2.6.1.0-129/hadoop-hdfs/lib/*:/usr/hdp/2.6.1.0-129/hadoop-hdfs/.//*:/usr/hdp/2.6.1.0-129/hadoop-yarn/lib/*:/usr/hdp/2.6.1.0-129/hadoop-yarn/.//*:/usr/hdp/2.6.1.0-129/hadoop-mapreduce/lib/*:/usr/hdp/2.6.1.0-129/hadoop-mapreduce/.//*:/usr/hdp/2.6.1.0-129/hadoop/conf:/usr/hdp/2.6.1.0-129/hadoop/lib/*:/usr/hdp/2.6.1.0-129/hadoop/.//*:/usr/hdp/2.6.1.0-129/hadoop-hdfs/./:/usr/hdp/2.6.1.0-129/hadoop-hdfs/lib/*:/usr/hdp/2.6.1.0-129/hadoop-hdfs/.//*:/usr/hdp/2.6.1.0-129/hadoop-yarn/lib/*:/usr/hdp/2.6.1.0-129/hadoop-yarn/.//*:/usr/hdp/2.6.1.0-129/hadoop-mapreduce/lib/*:/usr/hdp/2.6.1.0-129/hadoop-mapreduce/.//*::mysql-connector-java-5.1.17.jar:mysql-connector-java.jar:/usr/hdp/2.6.1.0-129/tez/*:/usr/hdp/2.6.1.0-129/tez/lib/*:/usr/hdp/2.6.1.0-129/tez/conf:mysql-connector-java-5.1.17.jar:mysql-connector-java.jar:/usr/hdp/2.6.1.0-129/tez/*:/usr/hdp/2.6.1.0-129/tez/lib/*:/usr/hdp/2.6.1.0-129/tez/conf


But I am getting a ClassNotFound error for Hadoop-related classes. The
error is attached.


error.txt

  
*Another problem:*

If I add the shaded Hadoop jar to the lib folder


-rw-r--r-- 1 hdfs hadoop 93184216 Nov 29 02:15 flink-dist_2.11-1.7.0.jar
-rw-r--r-- 1 hdfs hadoop    79219 Nov 29 03:33 flink-hadoop-compatibility_2.11-1.7.0.jar
-rw-r--r-- 1 hdfs hadoop   141881 Nov 29 02:13 flink-python_2.11-1.7.0.jar
*-rw-r--r-- 1 hdfs hadoop 41130742 Dec  8 22:38 flink-shaded-hadoop2-uber-1.7.0.jar*
-rw-r--r-- 1 hdfs hadoop   489884 Nov 28 23:01 log4j-1.2.17.jar
-rw-r--r-- 1 hdfs hadoop     9931 Nov 28 23:01 slf4j-log4j12-1.7.15.jar

I am getting the following error, and this happens with every version
greater than 1.4.2.

java.lang.IllegalAccessError: tried to access method
org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.getProxyInternal()Ljava/lang/Object;
from class org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider
    at org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider.init(RequestHedgingRMFailoverProxyProvider.java:75)
    at org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:163)
    at org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:94)
    at org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:72)
    at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:187)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at