Hi,

I am running into a type erasure problem that only occurs when I execute the code on a Flink cluster (1.1.2). I created a Gist [1] which reproduces the problem, and I also added a unit test showing that it does not fail in local and collection execution mode.
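
To give a rough idea without opening the link: the core is a generic function along these lines (a simplified sketch only, not the actual Gist code; the names Wrap, E and the Tuple2 output are made up for illustration):

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.api.java.tuple.Tuple2;

    // Generic mapper: E is erased at runtime, so Flink cannot always
    // extract the produced Tuple2<Long, E> type on its own.
    public class Wrap<E> implements MapFunction<E, Tuple2<Long, E>> {
        @Override
        public Tuple2<Long, E> map(E value) {
            return new Tuple2<>(1L, value);
        }
    }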

It may also be worth mentioning that, in my actual code, I manually created a TypeInformation (the same one that is created automatically on local execution) and gave it to the operators using .returns(..). However, this led to a new issue: my field forwarding annotations failed with invalid reference exceptions (the same annotations that work locally).
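
What I did there was roughly the following (again a simplified sketch; MyPojo, input and Wrap are placeholder names, not the actual code):

    import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.api.java.typeutils.TupleTypeInfo;
    import org.apache.flink.api.java.typeutils.TypeExtractor;

    // Manually built type information, matching what the TypeExtractor
    // produces for Tuple2<Long, MyPojo> on local execution.
    TypeInformation<Tuple2<Long, MyPojo>> resultType =
        new TupleTypeInfo<Tuple2<Long, MyPojo>>(
            BasicTypeInfo.LONG_TYPE_INFO, TypeExtractor.getForClass(MyPojo.class));

    // Hint the operator with it (input is a DataSet<MyPojo> defined earlier).
    // This avoids the type erasure error, but then the field forwarding
    // annotations on the generic functions fail with invalid reference exceptions.
    DataSet<Tuple2<Long, MyPojo>> wrapped =
        input.map(new Wrap<MyPojo>()).returns(resultType);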

The issue came up after I generalized the core of one of our algorithms. Before, when the types were non-generic, it ran without problems both locally and on the cluster.

Thanks in advance!

Cheers, Martin

[1] https://gist.github.com/s1ck/caf9f3f46e7a5afe6f6a73c479948fec

The exception in the Gist case:

The return type of function 'withPojo(Problem.java:58)' could not be determined automatically, due to type erasure. You can give type information hints by using the returns(...) method on the result of the transformation call, or by letting your function implement the 'ResultTypeQueryable' interface.
    org.apache.flink.api.java.DataSet.getType(DataSet.java:178)
    org.apache.flink.api.java.DataSet.collect(DataSet.java:407)
    org.apache.flink.api.java.DataSet.print(DataSet.java:1605)
    Problem.withPojo(Problem.java:60)
    Problem.main(Problem.java:38)
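
I guess I could also follow the second suggestion from the exception and let the generic function report its own result type via ResultTypeQueryable, with the element type passed in at construction time, something like the sketch below (made-up names again). But I would prefer the types to be extracted automatically, as they are in local execution.

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.api.java.typeutils.ResultTypeQueryable;
    import org.apache.flink.api.java.typeutils.TupleTypeInfo;

    public class Wrap<E> implements MapFunction<E, Tuple2<Long, E>>,
            ResultTypeQueryable<Tuple2<Long, E>> {

        // Element type handed in by the caller, since E itself is erased.
        private final TypeInformation<E> elementType;

        public Wrap(TypeInformation<E> elementType) {
            this.elementType = elementType;
        }

        @Override
        public Tuple2<Long, E> map(E value) {
            return new Tuple2<>(1L, value);
        }

        @Override
        public TypeInformation<Tuple2<Long, E>> getProducedType() {
            return new TupleTypeInfo<Tuple2<Long, E>>(
                BasicTypeInfo.LONG_TYPE_INFO, elementType);
        }
    }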
