[ https://issues.apache.org/jira/browse/FLINK-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15859176#comment-15859176 ]

ASF GitHub Bot commented on FLINK-2186:
---------------------------------------

Github user ex00 commented on a diff in the pull request:

    https://github.com/apache/flink/pull/3012#discussion_r100247839
  
    --- Diff: flink-core/src/main/java/org/apache/flink/api/java/typeutils/RowTypeInfo.java ---
    @@ -192,6 +193,28 @@ public void getFlatFields(String fieldExpression, int offset, List<FlatFieldDesc
                }
        }
     
    +   public RowTypeInfo(TypeInformation<?> mainType, int size, Map<Integer, TypeInformation<?>> additionalTypes) {
    +           this(configureTypes(mainType, size, additionalTypes));
    +   }
    +
    +   public RowTypeInfo(TypeInformation<?> mainType, int size) {
    +           this(configureTypes(mainType, size, Collections.<Integer, TypeInformation<?>>emptyMap()));
    +   }
    +
    +   private static TypeInformation<?>[] configureTypes(TypeInformation<?> mainType, int size, Map<Integer, TypeInformation<?>> additionalTypes) {
    --- End diff ---
    
    Could you format the arguments like in ```RowTypeInfo#createComparator#219```,
    for example:
    ```
    private static TypeInformation<?>[] configureTypes(
                TypeInformation<?> mainType,
                int size,
                Map<Integer, TypeInformation<?>> additionalTypes) {
    ```
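
The body of `configureTypes` is cut off in the quoted hunk, so for context only: a minimal sketch of what the helper presumably does, assuming every field defaults to `mainType` and `additionalTypes` overrides individual positions by index. The local variable names and the reliance on `java.util.Arrays`/`java.util.Map` are illustrative, not taken from the PR.

```
// Sketch only: assumes java.util.Arrays and java.util.Map are imported in RowTypeInfo.
private static TypeInformation<?>[] configureTypes(
        TypeInformation<?> mainType,
        int size,
        Map<Integer, TypeInformation<?>> additionalTypes) {
    // Default every position to the main type...
    TypeInformation<?>[] types = new TypeInformation<?>[size];
    Arrays.fill(types, mainType);
    // ...then apply per-index overrides for individual columns.
    for (Map.Entry<Integer, TypeInformation<?>> entry : additionalTypes.entrySet()) {
        types[entry.getKey()] = entry.getValue();
    }
    return types;
}
```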


> Rework CSV import to support very wide files
> --------------------------------------------
>
>                 Key: FLINK-2186
>                 URL: https://issues.apache.org/jira/browse/FLINK-2186
>             Project: Flink
>          Issue Type: Improvement
>          Components: Machine Learning Library, Scala API
>            Reporter: Theodore Vasiloudis
>            Assignee: Anton Solovev
>
> In the current readCsvFile implementation, importing CSV files with many 
> columns ranges from cumbersome to impossible.
> For example, to import an 11-column file we need to write:
> {code}
> val cancer = env.readCsvFile[(String, String, String, String, String, String,
>   String, String, String, String, String)]("/path/to/breast-cancer-wisconsin.data")
> {code}
> For many use cases in Machine Learning we might have CSV files with thousands 
> or millions of columns that we want to import as vectors.
> In that case using the current readCsvFile method becomes impossible.
> We therefore need to rework the current function, or create a new one that 
> will allow us to import CSV files with an arbitrary number of columns.
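
With the RowTypeInfo constructors quoted in the diff above (still part of the pending PR, not released API), a wide schema could be described without spelling out every column type by hand. A hedged sketch, assuming the two-argument constructor defaults all `size` fields to `mainType` and the map variant overrides single positions; the class name `WideRowExample` and the 1000-column width are illustrative only:

{code}
import java.util.Collections;

import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.RowTypeInfo;

public class WideRowExample {
	public static void main(String[] args) {
		// 1000 fields, all typed as String (relies on the PR's new two-argument constructor).
		RowTypeInfo allStrings = new RowTypeInfo(BasicTypeInfo.STRING_TYPE_INFO, 1000);

		// Same width, but column 0 overridden to Integer via the additionalTypes map.
		RowTypeInfo withIdColumn = new RowTypeInfo(
				BasicTypeInfo.STRING_TYPE_INFO,
				1000,
				Collections.<Integer, TypeInformation<?>>singletonMap(0, BasicTypeInfo.INT_TYPE_INFO));

		System.out.println(allStrings.getArity());     // expected: 1000
		System.out.println(withIdColumn.getTypeAt(0)); // expected: Integer
	}
}
{code}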



