Till mentioned that 'spilling to disk' is handled by catching an exception. The
last serialization error was caused by the Kryo buffer not being cleared after
spilling, inside that exception handling. Could we be dealing with a similar
issue, caused by another exception that is handled differently?
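To make the idea concrete, here is a minimal sketch of the failure mode I have
in mind (illustrative only, not the actual Flink code): a Kryo Output buffer
that is reused across records and not cleared on the exception path keeps the
stale bytes of the failed record, so the next write produces a corrupted stream.

import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.io.Output;

// Illustrative sketch only: a reused Kryo Output must be cleared when a write
// fails, otherwise the partially written bytes of the failed record are
// prepended to the next one.
public class ReusedKryoBuffer {

    private final Kryo kryo = new Kryo();
    private final Output output = new Output(4096, -1); // reused, growable buffer

    public byte[] serialize(Object record) {
        try {
            kryo.writeClassAndObject(output, record);
            byte[] bytes = output.toBytes();
            output.clear();   // reset position for the next record
            return bytes;
        } catch (RuntimeException e) {
            // If this clear() is missing on the exception path, the next call
            // starts from the stale position and corrupts the stream.
            output.clear();
            throw e;
        }
    }
}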

Regards,
Stefano


2016-05-23 18:44 GMT+02:00 Flavio Pompermaier <pomperma...@okkam.it>:

> You can try with this:
>
> import org.apache.flink.api.java.ExecutionEnvironment;
> import org.joda.time.DateTime;
>
> import de.javakaffee.kryoserializers.jodatime.JodaDateTimeSerializer;
>
> public class DateTimeError {
>
>     public static void main(String[] args) throws Exception {
>         ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
>         // env.registerTypeWithKryoSerializer(DateTime.class, JodaDateTimeSerializer.class);
>         env.fromElements(DateTime.now(), DateTime.now()).print();
>     }
> }
>
> With that row left commented out (i.e. without registering the serializer) you get:
>
> Exception in thread "main" java.lang.NullPointerException
>     at org.joda.time.tz.CachedDateTimeZone.getInfo(CachedDateTimeZone.java:143)
>     at org.joda.time.tz.CachedDateTimeZone.getOffset(CachedDateTimeZone.java:103)
>     at org.joda.time.format.DateTimeFormatter.printTo(DateTimeFormatter.java:722)
>     at org.joda.time.format.DateTimeFormatter.printTo(DateTimeFormatter.java:535)
>     at org.joda.time.format.DateTimeFormatter.print(DateTimeFormatter.java:671)
>     at org.joda.time.base.AbstractInstant.toString(AbstractInstant.java:424)
>     at org.joda.time.base.AbstractDateTime.toString(AbstractDateTime.java:314)
>     at java.lang.String.valueOf(String.java:2994)
>     at java.io.PrintStream.println(PrintStream.java:821)
>     at org.apache.flink.api.java.DataSet.print(DataSet.java:1607)
>
> Thanks for the support,
> Flavio
>
> On Mon, May 23, 2016 at 4:17 PM, Maximilian Michels <m...@apache.org>
> wrote:
>
>> What error do you get when you don't register the Kryo serializer?
>>
>> On Mon, May 23, 2016 at 11:57 AM, Flavio Pompermaier
>> <pomperma...@okkam.it> wrote:
>> > With these last settings I was able to complete the job the second time I
>> > tried to run it, without restarting the cluster.
>> > If I don't register the serializer for DateTime, the job doesn't start at
>> > all (as of Flink 1.x you have to register it [1]).
>> > I can't understand what's wrong :(
>> >
>> > [1]
>> >
>> https://cwiki.apache.org/confluence/display/FLINK/Migration+Guide%3A+0.10.x+to+1.0.x
>> >
>> > Best,
>> > Flavio
>>
>
>
>
> --
>
> Flavio Pompermaier
>
> *Development Department*
> *OKKAM**Srl **- www.okkam.it <http://www.okkam.it/>*
>
> *Phone:* +(39) 0461 283 702
> *Fax:* + (39) 0461 186 6433
> *Email:* pomperma...@okkam.it
> *Headquarters:* Trento (Italy), via G.B. Trener 8
> *Registered office:* Trento (Italy), via Segantini 23
>
>
