[
https://issues.apache.org/jira/browse/FLINK-4316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15413259#comment-15413259
]
ASF GitHub Bot commented on FLINK-4316:
---------------------------------------
Github user rmetzger commented on the issue:
https://github.com/apache/flink/pull/2338
+1 to merge, once the failing tests are fixed.
I think the exception to the API stability check is okay in this case. The
class is still in the same Java package.
These are the test failures:
```
Results :
Failed tests:
  PojoTypeExtractionTest.testPojoWC:203->checkWCPojoAsserts:244 position of field complex.valueType wrong expected:<2> but was:<5>
Tests in error:
  TypeInfoParserTest.testMultiDimensionalArray:321 » IllegalArgument String coul...
  TypeInfoParserTest.testPojoType:190 » IllegalArgument String could not be pars...
```
> Make flink-core independent of Hadoop
> -------------------------------------
>
> Key: FLINK-4316
> URL: https://issues.apache.org/jira/browse/FLINK-4316
> Project: Flink
> Issue Type: Bug
> Components: Core
> Affects Versions: 1.1.0
> Reporter: Stephan Ewen
> Assignee: Stephan Ewen
> Fix For: 1.2.0
>
>
> We want to gradually reduce the hard and heavy mandatory dependencies on
> Hadoop. Hadoop will still be part of (most) Flink downloads, but the API
> projects should not have a hard dependency on Hadoop.
> I suggest starting with {{flink-core}}, because it depends on Hadoop only
> for the {{Writable}} type, to support seamless operation on Hadoop types.
> I propose to move all {{WritableTypeInfo}}-related classes to the
> {{flink-hadoop-compatibility}} project and access them via reflection in the
> {{TypeExtractor}}.
> That way, {{Writable}} types will be supported out of the box whenever users
> have the {{flink-hadoop-compatibility}} project on the classpath.
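The reflective access described in the quoted proposal could be sketched roughly as below. This is a minimal illustration of the general technique, not Flink's actual implementation; the class name `ReflectiveTypeLookup` and the probed class name are assumptions for illustration.

```java
// Minimal sketch of the reflection-based lookup idea: probe whether a class
// is on the classpath, and instantiate it, without a compile-time dependency.
// Class and method names here are illustrative assumptions, not Flink's API.
public class ReflectiveTypeLookup {

    /** Returns true if the named class can be loaded from the current classpath. */
    public static boolean isOnClasspath(String className) {
        try {
            // 'false' = do not run static initializers; we only probe availability
            Class.forName(className, false, ReflectiveTypeLookup.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    /** Instantiates a TypeInfo-style class reflectively via a (Class) constructor. */
    public static Object instantiate(String typeInfoClassName, Class<?> typeClass)
            throws ReflectiveOperationException {
        Class<?> typeInfoClass = Class.forName(typeInfoClassName);
        return typeInfoClass.getConstructor(Class.class).newInstance(typeClass);
    }
}
```

With this pattern, the extractor can fall back gracefully: if the compatibility classes are absent, the probe simply returns false and no hard Hadoop dependency is needed.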
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)