GitHub user manishgupta88 opened a pull request:
https://github.com/apache/incubator-carbondata/pull/562
[CARBONDATA-668] Dataloads fail when no. of column in load query is
greater than the no. of column in create table
Problem: Data loads fail when the number of columns in the load query is
greater than the number of columns in the create table statement.
Analysis: During data load, a column-order mapping is maintained between the
table schema and the columns provided in the file header/CSV file. If a
duplicate column is provided in the file header, the extra column also gets
matched, so the mapping-order array is filled beyond the number of schema
columns and an ArrayIndexOutOfBoundsException is thrown.
Fix: Add a check so that the loop fills the mapping-order array only while
its length is less than the number of schema columns.
Impact Area: Data load flow
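The failure mode and guard described above can be sketched as follows. This is a minimal illustration, not the actual CarbonData code: the method name `mapHeaderToSchema` and its shape are assumptions made for the example; the real mapping logic lives in the data-load flow.

```java
import java.util.Arrays;
import java.util.List;

public class ColumnMappingSketch {

    // Hypothetical sketch of the mapping step: for each file-header column,
    // record its index in the table schema. Without the bounds guard, a
    // duplicate header column produces more matches than there are schema
    // columns and overruns the mapping-order array.
    static int[] mapHeaderToSchema(String[] fileHeader, List<String> schemaColumns) {
        int[] mappingOrder = new int[schemaColumns.size()];
        int matched = 0;
        for (String headerColumn : fileHeader) {
            int schemaIndex = schemaColumns.indexOf(headerColumn.toLowerCase());
            // Guard from the fix: only fill while the number of matched
            // columns is less than the number of schema columns, so a
            // duplicate header entry cannot write past the array bound.
            if (schemaIndex >= 0 && matched < schemaColumns.size()) {
                mappingOrder[matched++] = schemaIndex;
            }
        }
        return mappingOrder;
    }

    public static void main(String[] args) {
        // "id" appears twice in the header; without the guard this would
        // throw ArrayIndexOutOfBoundsException for a two-column schema.
        String[] header = {"id", "name", "id"};
        List<String> schema = Arrays.asList("id", "name");
        System.out.println(Arrays.toString(mapHeaderToSchema(header, schema)));
    }
}
```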
Be sure to do all of the following to help us incorporate your contribution
quickly and easily:
- [ ] Make sure the PR title is formatted like:
`[CARBONDATA-<Jira issue #>] Description of pull request`
- [ ] Make sure tests pass via `mvn clean verify`. (Even better, enable
Travis-CI on your fork and ensure the whole test matrix passes).
- [ ] Replace `<Jira issue #>` in the title with the actual Jira issue
number, if there is one.
- [ ] If this contribution is large, please file an Apache
[Individual Contributor License
Agreement](https://www.apache.org/licenses/icla.txt).
- [ ] Testing done
Please provide details on
- Whether new unit test cases have been added or why no new tests
are required?
- What manual testing you have done?
- Any additional information to help reviewers in testing this
change.
- [ ] For large changes, please consider breaking it into sub-tasks under
an umbrella JIRA.
---
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/manishgupta88/incubator-carbondata
CARBONDATA-668_duplicate_file_header_column
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/incubator-carbondata/pull/562.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #562
----
commit 9309799721ea5d3e051d9778d9e58bfabff4a43b
Author: manishgupta88 <[email protected]>
Date: 2017-01-20T11:13:25Z
Problem: Data loads fail when the number of columns in the load query is
greater than the number of columns in the create table statement.
Analysis: During data load, a column-order mapping is maintained between the
table schema and the columns provided in the file header/CSV file. If a
duplicate column is provided in the file header, the extra column also gets
matched, so the mapping-order array is filled beyond the number of schema
columns and an ArrayIndexOutOfBoundsException is thrown.
Fix: Add a check so that the loop fills the mapping-order array only while
its length is less than the number of schema columns.
Impact Area: Data load flow
----
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---