[ https://issues.apache.org/jira/browse/MNEMONIC-511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17395460#comment-17395460 ]

Yanhui Zhao commented on MNEMONIC-511:
--------------------------------------

One of the possible integration points is at 
https://github.com/apache/flink/blob/779070274cf00cb475cb7bab1332317ba39ac16e/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/StreamExecutionEnvironment.scala

def readTextFile(filePath: String): DataStream[String] =
  asScalaStream(javaEnv.readTextFile(filePath))

which traces back to the JavaEnv import at 
org.apache.flink.api.java.{CollectionEnvironment, ExecutionEnvironment => 
JavaEnv}; we still need to dig into the JavaEnv class to see the return type 
of the readTextFile call.
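
If that return type works out to an ordinary DataStream-compatible source, the natural hook for Mnemonic would be the environment's generic createInput entry point rather than readTextFile itself. A minimal calling-side sketch, assuming a hypothetical Mnemonic adapter that implements Flink's InputFormat (no such adapter exists yet in either project):

import org.apache.flink.api.common.io.InputFormat
import org.apache.flink.streaming.api.scala._

object MnemonicStreamSourceSketch {

  // Calling-side sketch only: `mneFormat` stands for a yet-to-be-written
  // adapter (hypothetical) that exposes Mnemonic durable data as a
  // Flink InputFormat[String, _].
  def readMnemonicText(env: StreamExecutionEnvironment,
                       mneFormat: InputFormat[String, _]): DataStream[String] =
    // createInput is the generic StreamExecutionEnvironment entry point for
    // custom InputFormats, so a Mnemonic source could plug in here without
    // touching the readTextFile code path at all.
    env.createInput(mneFormat)
}

The point of the sketch is only that the Scala API forwards source creation to the Java environment, so the integration work would live on the Java/InputFormat side.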

> Apache Flink integration
> ------------------------
>
>                 Key: MNEMONIC-511
>                 URL: https://issues.apache.org/jira/browse/MNEMONIC-511
>             Project: Mnemonic
>          Issue Type: New Feature
>    Affects Versions: 0.16.0
>            Reporter: Kevin Ratnasekera
>            Assignee: Yanhui Zhao
>            Priority: Major
>             Fix For: 0.16.0
>
>
> I noticed the existing Hadoop and Spark integrations in the Mnemonic project. I 
> would like to propose further integrating the Flink engine into the project. We 
> might be able to reuse the existing Hadoop input/output formats, or write such 
> input formats from scratch, to feed/sink data in/out of Flink jobs. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
