[ https://issues.apache.org/jira/browse/FLINK-3901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15332752#comment-15332752 ]

ASF GitHub Bot commented on FLINK-3901:
---------------------------------------

Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1989#discussion_r67257728
  
    --- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/sources/CsvTableSource.scala ---
    @@ -25,6 +25,8 @@ import org.apache.flink.api.java.typeutils.{TupleTypeInfoBase, TupleTypeInfo}
     import org.apache.flink.api.java.{ExecutionEnvironment, DataSet}
     import org.apache.flink.api.table.Row
     import org.apache.flink.core.fs.Path
    +import org.apache.flink.api.table.typeutils.RowTypeInfo
    +import org.apache.flink.api.java.io.RowCsvInputFormat
     
     /**
       * A [[TableSource]] for simple CSV files with up to 25 fields.
    --- End diff --
    
    Remove 25 field limitation


> Create a RowCsvInputFormat to use as default CSV IF in Table API
> ----------------------------------------------------------------
>
>                 Key: FLINK-3901
>                 URL: https://issues.apache.org/jira/browse/FLINK-3901
>             Project: Flink
>          Issue Type: Improvement
>    Affects Versions: 1.0.2
>            Reporter: Flavio Pompermaier
>            Assignee: Flavio Pompermaier
>            Priority: Minor
>              Labels: csv, null-values, row, tuple
>
> At the moment the Table API reads CSV files using the TupleCsvInputFormat, which
> is limited to 25 fields and cannot handle null values.
> A new InputFormat producing Row objects is needed to avoid these limitations.
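> To illustrate why a Row-based format lifts both limitations: a Flink Tuple is a
> fixed-arity class (Tuple1 through Tuple25), so records with more columns cannot be
> represented, and primitive-typed tuple fields cannot hold null. The sketch below is
> NOT Flink's actual RowCsvInputFormat; it is a minimal, self-contained Scala
> illustration with a hypothetical Row container and parser, showing how a
> variable-length container of nullable fields avoids both problems.
>
> ```scala
> // Hypothetical, simplified Row: a variable-length container of nullable
> // fields (Flink's real Row/RowCsvInputFormat are more elaborate).
> final class Row(val fields: Array[Any]) {
>   // Arity is the number of fields in this record, with no fixed upper bound.
>   def arity: Int = fields.length
>   def apply(i: Int): Any = fields(i)
> }
>
> object RowCsvParser {
>   // Parse one CSV line into a Row, mapping empty fields to null.
>   def parseLine(line: String, delimiter: Char = ','): Row = {
>     val fields: Array[Any] = line
>       .split(delimiter.toString, -1) // limit -1 keeps trailing empty fields
>       .map(f => if (f.isEmpty) null else f)
>     new Row(fields)
>   }
> }
> ```
>
> For example, parsing "a,,c" yields a three-field Row whose middle field is null,
> and a line with 30 columns simply yields a Row of arity 30, with no Tuple25 cap.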



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
