GitHub user fhueske commented on the issue:

    https://github.com/apache/flink/pull/2637
  
    Thanks for the update @kenmy. We are trying to keep the Java and Scala APIs 
as close as possible. Could you convert the Scala `FlinkHadoopEnvironment` into 
a `HadoopInputs` class as well?
    
    I also noticed that there are quite a few Hadoop-related tests in the 
`flink-tests` module. I think it would be good to move the tests from the 
`org.apache.flink.test.hadoop` and `org.apache.flink.api.scala.hadoop` packages 
of `flink-tests` to `flink-hadoop-compatibility`. 
    
    In fact, there might be some overlap with the tests already in 
`flink-hadoop-compatibility`. It would be great if you could check for 
overlapping test coverage; then we could drop the redundant tests.
    
    Thanks for your work,
    Fabian
