I've tested several configurations (I also tried changing my compilation
target to 1.7, but then Sesame 4 caused the error in [1]):

   1. Flink compiled with Java 1.7 (default), run within Eclipse with
   Java 8: OK
   2. Flink compiled with Java 1.7 (default), cluster run with Java 8:
   not able to run my job compiled with Java 1.8; it caused the reported
   exception (unsupported major.minor version)
   3. Flink compiled with Java 1.8: not able to compile without the
   reported modifications, but then the job ran fine

I don't know whether you ever tested all of those configurations, but I'm
sure it wasn't working when deployed on the cluster.

[1] http://rdf4j.org/doc/4/release-notes/4.0.0.docbook?view
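
For anyone hitting the major.minor mismatch of configuration 2: besides
javap -verbose, a quick way to check which Java version a .class file was
compiled for is to read its header directly. A minimal sketch (the class
name is made up; pass the path of a .class file as the argument):

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

// Prints a class file's version: major 51 = Java 7, major 52 = Java 8.
public class ClassVersionCheck {
    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
            if (in.readInt() != 0xCAFEBABE) {   // class-file magic number
                throw new IOException("not a class file");
            }
            int minor = in.readUnsignedShort();
            int major = in.readUnsignedShort();
            System.out.println("major.minor = " + major + "." + minor);
        }
    }
}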

On Thu, Feb 4, 2016 at 11:40 AM, Stephan Ewen <se...@apache.org> wrote:

> Hi!
>
> I have been running Java 8 for a year without an issue. The code is
> compiled for target Java 7, but can be run with Java 8.
> User code that targets Java 8 can be run if Flink itself is run with
> Java 8.
>
> The initial error you got was probably because you compiled with Java 8 as
> the target and ran it with Java 7.
>
> I would just leave the target at 1.7 and run it in a Java 8 JVM. User
> code can also be Java 8; that mixes seamlessly.
>
> Stephan
>
>
> On Thu, Feb 4, 2016 at 11:34 AM, Flavio Pompermaier <pomperma...@okkam.it>
> wrote:
>
>> Anyone looking into this? Java 7 reached its end of life in April 2015
>> with its last public update (number 80), and the ability to run Java 8
>> jobs will become more and more important in the future. IMHO, the default
>> target of the maven compiler plugin should be set to 1.8 in the 1.0
>> release. In most cases this would be backward compatible, and if it's not
>> you can always recompile with 1.7 (but as the exception this time).
>> Obviously this is not urgent; I just wanted to point it out and
>> hopefully help someone else facing the same problem.
>>
>> Best,
>> Flavio
>>
>>
>> On Wed, Feb 3, 2016 at 3:40 PM, Flavio Pompermaier <pomperma...@okkam.it>
>> wrote:
>>
>>> I've fixed it by changing the copy method in the *TupleSerializer* as
>>> follows:
>>>
>>> @Override
>>> public T copy(T from, T reuse) {
>>>     for (int i = 0; i < arity; i++) {
>>>         Object copy = fieldSerializers[i].copy(from.getField(i));
>>>         reuse.setField(copy, i);
>>>     }
>>>     return reuse;
>>> }
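>>>
>>> Pulling the copied value into a local Object variable seems to be what
>>> matters here: it presumably sidesteps the changed generic-type inference
>>> of javac 8 that produced the void-to-Object error quoted below.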
>>>
>>> And by commenting out line 50 in *CollectionExecutionAccumulatorsTest*:
>>>
>>> assertEquals(NUM_ELEMENTS,
>>>         result.getAccumulatorResult(ACCUMULATOR_NAME));
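>>>
>>> An alternative to commenting it out might be to pin the overload with an
>>> explicit cast (a sketch, assuming the accumulator result is an Integer):
>>>
>>> assertEquals(NUM_ELEMENTS,
>>>         (int) result.getAccumulatorResult(ACCUMULATOR_NAME));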
>>>
>>> I hope it helps.
>>>
>>> On Wed, Feb 3, 2016 at 3:12 PM, Flavio Pompermaier <pomperma...@okkam.it>
>>> wrote:
>>>
>>>> I've checked the compiled classes with javap -verbose and indeed they
>>>> had major.version=51 (Java 7).
>>>> So I changed the source and target to 1.8 in the main pom.xml, and now
>>>> the generated .class files have major.version=52.
>>>> Unfortunately now I get this error:
>>>>
>>>> [ERROR]
>>>> /opt/flink-src/flink-java/src/main/java/org/apache/flink/api/java/typeutils/runtime/TupleSerializer.java:[104,63]
>>>> incompatible types: void cannot be converted to java.lang.Object
>>>>
>>>> How can I fix it? I also tried upgrading the maven compiler plugin to
>>>> 3.5, but it didn't help :(
>>>>
>>>> Best,
>>>> Flavio
>>>>
>>>> On Wed, Feb 3, 2016 at 2:38 PM, Flavio Pompermaier <pomperma...@okkam.it>
>>>> wrote:
>>>>
>>>>> Hi to all,
>>>>>
>>>>> I was trying to make my Java 8 application run on a Flink 0.10.1
>>>>> cluster.
>>>>> I've compiled both the Flink sources and my app with the same Java
>>>>> version (1.8.72), and I've set env.java.home to point to my Java 8 JVM
>>>>> in every flink-conf.yaml of the cluster.
>>>>>
>>>>> I always get the following Exception:
>>>>>
>>>>> java.lang.UnsupportedClassVersionError: XXX: Unsupported major.minor
>>>>> version 52.0
>>>>>
>>>>> Is there any other setting I forgot to check? Do I also have to change
>>>>> the source and target to 1.8 in the maven compiler settings of the main
>>>>> pom?
>>>>>
>>>>> Best,
>>>>> Flavio
>>>>>
>>>>
>>>>
>>>>
>>>
>
