[
https://issues.apache.org/jira/browse/FLINK-2444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14660371#comment-14660371
]
James Cao commented on FLINK-2444:
----------------------------------
Hi, for sufficient test coverage, what's the expected strategy?
The Hive and Hadoop communities use the Hadoop MiniCluster for end-to-end unit
tests. I tried running a Flink word count job against the MiniCluster inside
the IDE; it takes about 5s (including provisioning the mini cluster and tearing
it down afterwards). Is this an acceptable running time?
I guess that if we use the MiniCluster, we can get relatively sufficient test
coverage of the HadoopInputFormats' wrapped "format" for both the mapred- and
mapreduce-style APIs, and it's probably not easy to set up a mock test that
simulates the Hadoop FS environment. The problem with the MiniCluster is that
it's only available in Hadoop 2, so it's not available in the hadoop1 profile.
I think the issue I am working on, [FLINK-1919] HCatOutputFormat, has a
similar problem. Do we want to run the test against a mini Hive server in that
case?
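For the mock-test route, a delegation check can be done without any Hadoop
environment at all. The sketch below is a minimal, self-contained illustration;
ToyInputFormat, RecordingInputFormat, and Wrapper are hypothetical stand-ins,
not Flink's actual HadoopInputFormat classes, and only show the pattern of
recording calls on a stubbed format and asserting the wrapper forwards them.

```java
// Minimal sketch: verify a wrapper delegates to the wrapped "format".
// All names here are hypothetical stand-ins, not Flink's real classes.
import java.util.ArrayList;
import java.util.List;

public class WrapperDelegationTest {

    // Stand-in for a Hadoop-style InputFormat.
    interface ToyInputFormat {
        void configure(String conf);
        List<String> getSplits();
    }

    // Recording stub: remembers which methods were called, in order.
    static class RecordingInputFormat implements ToyInputFormat {
        final List<String> calls = new ArrayList<>();
        public void configure(String conf) { calls.add("configure:" + conf); }
        public List<String> getSplits() { calls.add("getSplits"); return List.of("split-0"); }
    }

    // Stand-in for a HadoopInputFormatBase-like wrapper that must delegate.
    static class Wrapper {
        private final ToyInputFormat wrapped;
        Wrapper(ToyInputFormat wrapped) { this.wrapped = wrapped; }
        List<String> open(String conf) {
            wrapped.configure(conf);     // must forward configuration
            return wrapped.getSplits();  // must forward split computation
        }
    }

    public static void main(String[] args) {
        RecordingInputFormat stub = new RecordingInputFormat();
        List<String> splits = new Wrapper(stub).open("job-conf");

        // The wrapper must have called the wrapped format's methods, in order.
        if (!stub.calls.equals(List.of("configure:job-conf", "getSplits")))
            throw new AssertionError("unexpected calls: " + stub.calls);
        if (!splits.equals(List.of("split-0")))
            throw new AssertionError("unexpected splits: " + splits);
        System.out.println("delegation verified");
    }
}
```

A mocking library such as Mockito would replace the hand-written recording stub
in a real test, but the shape of the assertion is the same: the wrapped format's
methods were invoked with the expected arguments.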
> Add tests for HadoopInputFormats
> --------------------------------
>
> Key: FLINK-2444
> URL: https://issues.apache.org/jira/browse/FLINK-2444
> Project: Flink
> Issue Type: Test
> Components: Hadoop Compatibility, Tests
> Affects Versions: 0.10, 0.9.0
> Reporter: Fabian Hueske
> Labels: starter
>
> The HadoopInputFormat and HadoopInputFormatBase classes are not sufficiently
> covered by unit tests.
> We need tests that ensure that the methods of the wrapped Hadoop InputFormats
> are correctly called.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)