Re: Flink 1.1.0 : Hadoop 1/2 compatibility mismatch

2016-08-10 Thread Shannon Carey
Works for me, thanks!



-Shannon


RE: Flink 1.1.0 : Hadoop 1/2 compatibility mismatch

2016-08-10 Thread LINZ, Arnaud
Hi,
Good for me; my unit tests all passed with this RC version.
Thanks,
Arnaud

-----Original Message-----
From: Ufuk Celebi [mailto:u...@apache.org]
Sent: Tuesday, August 9, 2016 18:33
To: Ufuk Celebi <u...@apache.org>
Cc: user@flink.apache.org; d...@flink.apache.org
Subject: Re: Flink 1.1.0 : Hadoop 1/2 compatibility mismatch

I've started a vote for 1.1.1 containing hopefully fixed artifacts. If you have 
any spare time, would you mind checking whether it fixes your problem?

The artifacts are here: http://home.apache.org/~uce/flink-1.1.1-rc1/

You would have to add the following repository to your Maven project and update 
the Flink version to 1.1.1:



<repository>
  <id>flink-rc</id>
  <name>flink-rc</name>
  <url>https://repository.apache.org/content/repositories/orgapacheflink-1101</url>
  <releases>
    <enabled>true</enabled>
  </releases>
  <snapshots>
    <enabled>false</enabled>
  </snapshots>
</repository>


Would really appreciate it!



Re: Flink 1.1.0 : Hadoop 1/2 compatibility mismatch

2016-08-09 Thread Ufuk Celebi
I've started a vote for 1.1.1 containing hopefully fixed artifacts. If
you have any spare time, would you mind checking whether it fixes your
problem?

The artifacts are here: http://home.apache.org/~uce/flink-1.1.1-rc1/

You would have to add the following repository to your Maven project
and update the Flink version to 1.1.1:



<repository>
  <id>flink-rc</id>
  <name>flink-rc</name>
  <url>https://repository.apache.org/content/repositories/orgapacheflink-1101</url>
  <releases>
    <enabled>true</enabled>
  </releases>
  <snapshots>
    <enabled>false</enabled>
  </snapshots>
</repository>


Would really appreciate it!
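
For the version bump mentioned above, a minimal sketch of the corresponding POM change, assuming the project routes the Flink version through a flink.version property (adapt to however your dependencies are actually declared):

<properties>
  <!-- Point the build at the 1.1.1 release-candidate artifacts. -->
  <flink.version>1.1.1</flink.version>
</properties>

<dependencies>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-hadoop-compatibility_2.10</artifactId>
    <version>${flink.version}</version>
  </dependency>
  <!-- ... all other Flink modules pointed at ${flink.version} as well ... -->
</dependencies>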



Re: Flink 1.1.0 : Hadoop 1/2 compatibility mismatch

2016-08-09 Thread Ufuk Celebi
As noted in the other thread, this is a problem with the Maven
artifacts of 1.1.0 :-( I've added a warning to the release notes and
will start an emergency vote for 1.1.1 which only updates the Maven
artifacts.

Flink 1.1.0 : Hadoop 1/2 compatibility mismatch

2016-08-09 Thread LINZ, Arnaud
Hello,

I've switched to 1.1.0, but part of my code no longer works.

Although I have no Hadoop 1 jar in my dependencies (2.7.1 clients &
flink-hadoop-compatibility_2.10 1.1.0), I get a weird JobContext version
mismatch error that I was unable to understand.

The code reads a Hive table in a local batch Flink cluster using an M/R job
(from the correct package, mapreduce, not the old mapred).

import org.apache.hadoop.mapreduce.InputFormat;
import org.apache.hadoop.mapreduce.Job;
import org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormat;
(…)
final Job job = Job.getInstance();
final InputFormat<NullWritable, DefaultHCatRecord> hCatInputFormat =
    (InputFormat) HCatInputFormat.setInput(job, table.getDbName(),
        table.getTableName(), filter);

final HadoopInputFormat<NullWritable, DefaultHCatRecord> inputFormat =
    new HadoopInputFormat<NullWritable, DefaultHCatRecord>(hCatInputFormat,
        NullWritable.class, DefaultHCatRecord.class, job);

final HCatSchema inputSchema =
    HCatInputFormat.getTableSchema(job.getConfiguration());
return cluster
    .createInput(inputFormat)
    .flatMap(new RichFlatMapFunction<Tuple2<NullWritable, DefaultHCatRecord>, T>() {
        @Override
        public void flatMap(Tuple2<NullWritable, DefaultHCatRecord> value,
                Collector out) throws Exception { // NOPMD
            (...)
        }
    }).returns(beanClass);


The exception is:
org.apache.flink.runtime.client.JobExecutionException: Failed to submit job 69dba7e4d79c05d2967dca4d4a27cf38 (Flink Java Job at Tue Aug 09 09:19:41 CEST 2016)
    at org.apache.flink.runtime.jobmanager.JobManager.org$apache$flink$runtime$jobmanager$JobManager$$submitJob(JobManager.scala:1281)
    at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1.applyOrElse(JobManager.scala:478)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
    at org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:36)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
    at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
    at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
    at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
    at org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
    at org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:121)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    at akka.actor.ActorCell.invoke(ActorCell.scala:487)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
    at akka.dispatch.Mailbox.run(Mailbox.scala:221)
    at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: org.apache.flink.runtime.JobException: Creating the input splits caused an error: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
    at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:172)
    at org.apache.flink.runtime.executiongraph.ExecutionGraph.attachJobGraph(ExecutionGraph.java:695)
    at org.apache.flink.runtime.jobmanager.JobManager.org$apache$flink$runtime$jobmanager$JobManager$$submitJob(JobManager.scala:1178)
    ... 23 more
Caused by: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
    at org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.createInputSplits(HadoopInputFormatBase.java:158)
    at
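
Since the root cause is that org.apache.hadoop.mapreduce.JobContext is a concrete class in Hadoop 1.x but an interface in Hadoop 2.x, a minimal sketch for checking which Hadoop flavour actually ends up on the job's classpath could look like this (class name and approach are illustrative, not part of the failing job):

import org.apache.hadoop.mapreduce.JobContext;

public class HadoopFlavourCheck {
    public static void main(String[] args) {
        // Hadoop 2.x prints true; Hadoop 1.x prints false.
        System.out.println("JobContext is an interface: " + JobContext.class.isInterface());
        // Shows which jar the class was actually loaded from.
        System.out.println("Loaded from: "
                + JobContext.class.getProtectionDomain().getCodeSource().getLocation());
    }
}

Running this with the same classpath as the Flink job should reveal whether a Hadoop 1 flavoured artifact (here, the broken 1.1.0 Maven artifacts) is shadowing the expected Hadoop 2.7.1 classes.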