[jira] [Commented] (SQOOP-3150) issue with sqoop hive import with partitions

2017-04-15 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15969934#comment-15969934
 ] 

Eric Lin commented on SQOOP-3150:
---------------------------------

Will try to reproduce the issue first using the latest Sqoop code.
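
For reference, a minimal repro sketch (the Hive DDL, JDBC connect string,
credentials and source table name below are placeholders I am assuming; only
employees_p, the date partition key/value and the hive flags come from the
report):

    # create the partitioned Hive table (exact DDL is assumed; the report
    # only says employees_p is partitioned on a date string column)
    hive -e "CREATE TABLE employees_p (id INT, name STRING)
             PARTITIONED BY (\`date\` STRING)
             ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';"

    # case 1: point --target-dir at the existing EMPLOYEES directory
    sqoop import \
      --connect jdbc:mysql://localhost/testdb \
      --username sqoop --password sqoop \
      --table EMPLOYEES \
      --hive-import \
      --hive-overwrite \
      --hive-table employees_p \
      --hive-partition-key date \
      --hive-partition-value 10-03-2017 \
      --target-dir /user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES \
      -m 1

If this reproduces, the command above should fail with the "directory already
exists" error once .../EMPLOYEES exists, and re-running it with --target-dir
.../EMPLOYEES/anyname should show the case 2 layout instead.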

> issue with sqoop hive import with partitions
> --------------------------------------------
>
> Key: SQOOP-3150
> URL: https://issues.apache.org/jira/browse/SQOOP-3150
> Project: Sqoop
>  Issue Type: Bug
>  Components: hive-integration
>Affects Versions: 1.4.6
> Environment: CentOS
>Reporter: Ankit Kumar
>Assignee: Eric Lin
>  Labels: features
>
> Sqoop Command:
>   sqoop import \
>   ...
>   --hive-import  \
>   --hive-overwrite  \
>   --hive-table employees_p  \
>   --hive-partition-key date  \
>   --hive-partition-value 10-03-2017  \
>   --target-dir ..\
>   -m 1  
>   
>   hive-table script:
>   employees_p is a table partitioned on a date (string) column
>   
>   Issue:
>   Case 1: With --target-dir
> /user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES,
>   running the above sqoop command fails with a "directory already 
> exists" error.
>   
>   Case 2: With --target-dir
> /user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES/anyname,
>   the above sqoop command creates the hive partition (date=10-03-2017) 
> and the directory
>   '/user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES/date=10-03-2017'
>   
> Expected behaviour: Since --hive-partition-key and 
> --hive-partition-value are present in the sqoop command, it should 
> automatically create the partition directory inside EMPLOYEES,
> i.e. '/user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES/date=10-03-2017'



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (SQOOP-3150) issue with sqoop hive import with partitions

2017-04-15 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin reassigned SQOOP-3150:
-------------------------------

Assignee: Eric Lin




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SQOOP-3158) Columns added to MySQL after initial sqoop import, export back to table with same schema fails

2017-04-15 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15969927#comment-15969927
 ] 

Eric Lin commented on SQOOP-3158:
---------------------------------

Review: https://reviews.apache.org/r/58466/
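
To help reviewers reproduce, a sketch of the steps as I read the report (the
MySQL database, table names, connect string and credentials are placeholders;
the column names and row values come from the description):

    # original two-column table (placeholder names throughout)
    mysql testdb -e "CREATE TABLE emp (id INT PRIMARY KEY, name VARCHAR(32));
                     INSERT INTO emp VALUES (1,'Raj'),(2,'Jack');"

    # initial import into HDFS
    sqoop import \
      --connect jdbc:mysql://localhost/testdb \
      --username sqoop --password sqoop \
      --table emp \
      --target-dir /user/hdfs/emp \
      -m 1

    # a third column is added, then two more rows arrive
    mysql testdb -e "ALTER TABLE emp ADD COLUMN salary INT;
                     INSERT INTO emp VALUES (3,'Jill',2000),(4,'Nick',3000);"

    # incremental append picks up only the new, wider rows
    sqoop import \
      --connect jdbc:mysql://localhost/testdb \
      --username sqoop --password sqoop \
      --table emp \
      --target-dir /user/hdfs/emp \
      --incremental append --check-column id --last-value 2 \
      -m 1

    # new table with the full three-column schema
    mysql testdb -e "CREATE TABLE emp_copy (id INT PRIMARY KEY,
                     name VARCHAR(32), salary INT);"

    # export it back; the export directory now mixes 2-field and
    # 3-field records, and the export fails
    sqoop export \
      --connect jdbc:mysql://localhost/testdb \
      --username sqoop --password sqoop \
      --table emp_copy \
      --export-dir /user/hdfs/emp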

> Columns added to MySQL after initial sqoop import, export back to table with 
> same schema fails 
> -----------------------------------------------------------------------------------------------
>
> Key: SQOOP-3158
> URL: https://issues.apache.org/jira/browse/SQOOP-3158
> Project: Sqoop
>  Issue Type: Improvement
>Reporter: viru reddy
>Assignee: Eric Lin
>  Labels: newbie
> Attachments: SQOOP-3158.patch
>
>
> Until yesterday I had a table in MySQL with 2 columns, id and name:
> 1,Raj
> 2,Jack
> I imported this data into HDFS yesterday as a file. Today we added a new 
> column called salary to the table in MySQL. The table now looks like this:
> 1,Raj
> 2,Jack
> 3,Jill,2000
> 4,Nick,3000
> I then ran an incremental import on this table as a file.
> The Part-m-0 file contains:
> 1,Raj
> 2,Jack
> The Part-m-1 file contains:
> 3,Jill,2000
> 4,Nick,3000
> I then created a new table in MySQL with the same schema as the original 
> table, with columns id, name and salary.
> When I run sqoop export, only the last 2 rows are inserted into the new 
> table in MySQL, and the export fails.
> How can I get all the rows inserted into the table?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)