[ 
https://issues.apache.org/jira/browse/PHOENIX-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17221695#comment-17221695
 ] 

Hadoop QA commented on PHOENIX-5258:
------------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} | {color:red} PHOENIX-5258 does not apply to 4.x-HBase-1.4. Rebase required? Wrong branch? See https://yetus.apache.org/documentation/in-progress/precommit-patchnames for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | PHOENIX-5258 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12968647/PHOENIX-5258-4.x-HBase-1.4.001.patch |
| Console output | https://ci-hadoop.apache.org/job/PreCommit-PHOENIX-Build/117/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |


This message was automatically generated.



> Add support to parse header from the input CSV file as input columns for 
> CsvBulkLoadTool
> ----------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-5258
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5258
>             Project: Phoenix
>          Issue Type: Improvement
>            Reporter: Prashant Vithani
>            Assignee: Prashant Vithani
>            Priority: Minor
>             Fix For: 4.15.1, 5.1.1, 4.16.1, 4.17.0
>
>         Attachments: PHOENIX-5258-4.x-HBase-1.4.001.patch, 
> PHOENIX-5258-4.x-HBase-1.4.patch, PHOENIX-5258-master.001.patch, 
> PHOENIX-5258-master.patch
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently, CsvBulkLoadTool does not support reading a header from the input 
> CSV and expects the content of the CSV to match the table schema. Support 
> for a header can be added to map the schema to the header dynamically.
> The proposed solution is to introduce another option for the tool, 
> `--parse-header`. If this option is passed, the input column list is 
> constructed by reading the first line of the input CSV file.
>  * If there is only one file, read the header from its first line and 
> generate the `ColumnInfo` list.
>  * If there are multiple files, read the header from every file, and throw 
> an error if the headers across files do not match.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
