[jira] [Created] (SQOOP-3221) Enable ability to FTPS data from the mainframe

2017-08-10 Thread Tyler Seader (JIRA)
Tyler Seader created SQOOP-3221:
---

 Summary: Enable ability to FTPS data from the mainframe
 Key: SQOOP-3221
 URL: https://issues.apache.org/jira/browse/SQOOP-3221
 Project: Sqoop
  Issue Type: Improvement
Affects Versions: 1.99.7
Reporter: Tyler Seader


Sqoop currently uses org.apache.commons.net.ftp.FTPClient for the *sqoop 
import-mainframe* command. Commons Net also provides an FTPSClient that 
extends FTPClient. The improvement request is to add an FTPSClient option so 
that data transfer over the wire is secure. 

See MainframeFTPClientUtils.java as a reference in Sqoop source code.
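For context, explicit FTPS secures the FTP control channel with TLS (AUTH TLS) and can also encrypt the data channel (PROT P). A minimal sketch of the kind of session such an option would enable, using Python's stdlib ftplib purely for illustration (the function name, host, and credentials are placeholders, not part of Sqoop):

```python
from ftplib import FTP_TLS

def fetch_dataset_over_ftps(host: str, user: str, password: str, dataset: str) -> bytes:
    """Retrieve a file over explicit FTPS (AUTH TLS, then PROT P)."""
    chunks = []
    ftps = FTP_TLS(host)        # connect; AUTH TLS is issued during login
    ftps.login(user, password)  # control channel is now TLS-protected
    ftps.prot_p()               # switch the data channel to TLS as well
    ftps.retrbinary(f"RETR {dataset}", chunks.append)
    ftps.quit()
    return b"".join(chunks)
```

The same AUTH/PBSZ/PROT handshake is what commons-net's FTPSClient performs, which is why it can slot in where FTPClient is used today.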



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (SQOOP-3132) sqoop export from Hive table stored in Parquet format to Oracle CLOB column results in (null)

2017-08-10 Thread Sandish Kumar HN (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandish Kumar HN reassigned SQOOP-3132:
---

Assignee: Sandish Kumar HN

> sqoop export from Hive table stored in Parquet format to Oracle CLOB column 
> results in (null)
> -
>
> Key: SQOOP-3132
> URL: https://issues.apache.org/jira/browse/SQOOP-3132
> Project: Sqoop
>  Issue Type: Bug
>  Components: connectors/oracle, hive-integration
>Affects Versions: 1.4.6
> Environment: sandbox
>Reporter: Ramprasad
>Assignee: Sandish Kumar HN
>Priority: Critical
>  Labels: beginner
>
> I am trying to export a String column from a Hive table (stored in Parquet 
> format) to an Oracle CLOB data type column using sqoop export. Below are the 
> commands I run to create the tables in Oracle and Hive, and the sqoop 
> command I use to export the data.
> Table creation & insert into Hive: 
> create table default.sqoop_oracle_clob_test (sample_id int, verylargestring 
> String) stored as PARQUET; 
> [SUCCESS] 
> insert into default.sqoop_oracle_clob_test (sample_id, verylargestring) 
> values (123, "Really a very large String"); 
> insert into default.sqoop_oracle_clob_test (sample_id, verylargestring) 
> values (456, "Another very large String"); 
> [SUCCESS]
> Table creation in Oracle 
> create table sqoop_exported_oracle (sample_id NUMBER, verylargestring CLOB); 
> [success] 
> Sqoop export command:
> sqoop \
> export \
> --connect jdbc:oracle:thin:@//host:port/database_name \
> --username ** \
> --password ** \
> --table sqoop_exported_oracle \
> --columns SAMPLE_ID,VERYLARGESTRING \
> --map-column-java "VERYLARGESTRING=String" \
> --hcatalog-table "sqoop_oracle_clob_test" \
> --hcatalog-database "default"
> The sqoop job executes without any error messages and reports 
> "Exported 2 records".
> The result in Oracle table is as below,
> select * from sqoop_exported_oracle;
> sample_id | verylargestring
> 123 | (null)
> 456 | (null) 
> I tried using --staging-table as well, but it gave the same result. I 
> suspect this is a bug when exporting to Oracle CLOB columns from a Hive 
> table stored in Parquet format.





[jira] [Updated] (SQOOP-3007) Sqoop2: Improve repository upgrade mechanism

2017-08-10 Thread Boglarka Egyed (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boglarka Egyed updated SQOOP-3007:
--
Labels: features  (was: features newbie)

> Sqoop2: Improve repository upgrade mechanism
> 
>
> Key: SQOOP-3007
> URL: https://issues.apache.org/jira/browse/SQOOP-3007
> Project: Sqoop
>  Issue Type: Improvement
>  Components: sqoop2-derby-repository, sqoop2-postgresql-repository
>Affects Versions: 1.99.7
>Reporter: Boglarka Egyed
>  Labels: features
>
> The repository upgrade mechanism should be improved to support moving from a 
> release build of Sqoop2 to a cut of the sqoop2 (master) branch if a user 
> needs some upstream feature and then wants to switch back to a release in 
> the future. This should keep repository upgrades working throughout the 
> whole process, so the user doesn't have to wait for the next release of 
> Sqoop2 to get a bug fix.





[jira] [Updated] (SQOOP-3199) Sqoop 1.4.7 release preparation

2017-08-10 Thread Szabó Attila (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szabó Attila updated SQOOP-3199:

Attachment: text.html

Hey Bogi,

I'll correct it during this evening

Cheers
Attila

On Aug 10, 2017 3:01 PM, "Boglarka Egyed (JIRA)"  wrote:

[ 
https://issues.apache.org/jira/browse/SQOOP-3199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121574#comment-16121574
 ]

Boglarka Egyed commented on SQOOP-3199:
---

Hi [~maugli],

I have realized that SQOOP-3174 was neither in a Resolved state nor tagged to 
1.4.7, so I have resolved it and set the target version properly. I checked and 
it has been included in branch-1.4.7 - which is good - but it has not been 
included in the CHANGELOG.txt.

Could you please correct it?

Thanks,
Bogi







> Sqoop 1.4.7 release preparation
> ---
>
> Key: SQOOP-3199
> URL: https://issues.apache.org/jira/browse/SQOOP-3199
> Project: Sqoop
>  Issue Type: Task
>Reporter: Attila Szabo
>Assignee: Attila Szabo
> Fix For: no-release
>
> Attachments: text.html
>
>
> Umbrella jira for the 1.4.7 release.
> For reference, the release wikis are:
> https://cwiki.apache.org/confluence/display/SQOOP/How+to+Release
> https://cwiki.apache.org/confluence/display/SQOOP/How+to+Release+Sqoop2





[jira] [Assigned] (SQOOP-3007) Sqoop2: Improve repository upgrade mechanism

2017-08-10 Thread Boglarka Egyed (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boglarka Egyed reassigned SQOOP-3007:
-

Assignee: (was: Boglarka Egyed)

> Sqoop2: Improve repository upgrade mechanism
> 
>
> Key: SQOOP-3007
> URL: https://issues.apache.org/jira/browse/SQOOP-3007
> Project: Sqoop
>  Issue Type: Improvement
>  Components: sqoop2-derby-repository, sqoop2-postgresql-repository
>Affects Versions: 1.99.7
>Reporter: Boglarka Egyed
>  Labels: features, newbie
>
> The repository upgrade mechanism should be improved to support moving from a 
> release build of Sqoop2 to a cut of the sqoop2 (master) branch if a user 
> needs some upstream feature and then wants to switch back to a release in 
> the future. This should keep repository upgrades working throughout the 
> whole process, so the user doesn't have to wait for the next release of 
> Sqoop2 to get a bug fix.





[jira] [Commented] (SQOOP-3199) Sqoop 1.4.7 release preparation

2017-08-10 Thread Boglarka Egyed (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121574#comment-16121574
 ] 

Boglarka Egyed commented on SQOOP-3199:
---

Hi [~maugli],

I have realized that SQOOP-3174 was neither in a Resolved state nor tagged to 
1.4.7, so I have resolved it and set the target version properly. I checked and 
it has been included in branch-1.4.7 - which is good - but it has not been 
included in the CHANGELOG.txt.

Could you please correct it?

Thanks,
Bogi 

> Sqoop 1.4.7 release preparation
> ---
>
> Key: SQOOP-3199
> URL: https://issues.apache.org/jira/browse/SQOOP-3199
> Project: Sqoop
>  Issue Type: Task
>Reporter: Attila Szabo
>Assignee: Attila Szabo
> Fix For: no-release
>
>
> Umbrella jira for the 1.4.7 release.
> For reference, the release wikis are:
> https://cwiki.apache.org/confluence/display/SQOOP/How+to+Release
> https://cwiki.apache.org/confluence/display/SQOOP/How+to+Release+Sqoop2





[jira] [Resolved] (SQOOP-3187) Sqoop import as PARQUET to S3 failed

2017-08-10 Thread Sandish Kumar HN (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandish Kumar HN resolved SQOOP-3187.
-
    Resolution: Fixed
Fix Version/s: 1.4.7

Upgraded the Kite SDK as part of SQOOP-3192.
Example Run:
sqoop import -D fs.s3n.awsAccessKeyId="" -D fs.s3n.awsSecretAccessKey="" 
--connect jdbc:mysql://localhost:3306/db1 --username  --password 
 --query "select * from t1 where \$CONDITIONS" --num-mappers 1 
--target-dir s3n://bucket/dataset/outfolder --as-parquetfile
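The original failure ("Wrong FS: s3://bucket/foo/bar, expected: hdfs://master-ip:8020") is the standard Hadoop filesystem-scheme check: a path is rejected when its URI scheme doesn't match the filesystem the writer was created against. A minimal Python sketch of that kind of validation (an illustration of the check, not Kite's or Hadoop's actual code):

```python
from urllib.parse import urlparse

def check_path(default_fs: str, target: str) -> None:
    """Reject a target path whose URI scheme differs from the default filesystem's."""
    expected = urlparse(default_fs).scheme
    actual = urlparse(target).scheme
    # A scheme-less path is treated as relative to the default filesystem.
    if actual and actual != expected:
        raise ValueError(f"Wrong FS: {target}, expected: {default_fs}")

check_path("hdfs://master-ip:8020", "hdfs://master-ip:8020/user/out")  # passes
# check_path("hdfs://master-ip:8020", "s3://bucket/foo/bar")  would raise ValueError
```

The Kite SDK upgrade makes the dataset layer resolve the s3n:// filesystem explicitly instead of validating the target against the cluster's default HDFS.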

> Sqoop import as PARQUET to S3 failed
> 
>
> Key: SQOOP-3187
> URL: https://issues.apache.org/jira/browse/SQOOP-3187
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Surendra Nichenametla
>Assignee: Sandish Kumar HN
> Fix For: 1.4.7
>
>
> Sqoop import as a parquet file to S3 fails. The command and error are given 
> below. However, import to an HDFS location works.
> sqoop import --connect "jdbc:oracle:thin:@:1521/ORCL" --table 
> mytable --username myuser --password mypass --target-dir s3://bucket/foo/bar/ 
> --columns col1,col2 -m1 --as-parquetfile
> 17/05/09 21:00:18 ERROR tool.ImportTool: Imported Failed: Wrong FS: 
> s3://bucket/foo/bar, expected: hdfs://master-ip:8020
> P.S. I tried this from Amazon EMR cluster.





[jira] [Commented] (SQOOP-3215) Sqoop create hive table to support other formats(avro,parquet)

2017-08-10 Thread Sandish Kumar HN (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121426#comment-16121426
 ] 

Sandish Kumar HN commented on SQOOP-3215:
-

[~ericlin] I'm interested in working on this. Can I take it?

> Sqoop create hive table to support other formats(avro,parquet)
> --
>
> Key: SQOOP-3215
> URL: https://issues.apache.org/jira/browse/SQOOP-3215
> Project: Sqoop
>  Issue Type: Improvement
>Affects Versions: 1.4.6
>Reporter: Nitish Khanna
>Assignee: Eric Lin
>
> Hi Team,
> Sqoop doesn't support any format other than text when we make use of 
> "create-hive-table".
> It would be great if sqoop could create avro, parquet, etc. format tables 
> (schema only).
> I tried the below command to create an avro format table in hive.
> [root@host-10-17-81-13 ~]# sqoop create-hive-table --connect $MYCONN 
> --username $MYUSER --password $MYPSWD --table test_table --hive-table 
> test_table_avro --as-avrodatafile
> Warning: 
> /opt/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p0.2/bin/../lib/sqoop/../accumulo 
> does not exist! Accumulo imports will fail.
> Please set $ACCUMULO_HOME to the root of your Accumulo installation.
> 17/07/26 21:23:38 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.8.3
> 17/07/26 21:23:38 WARN tool.BaseSqoopTool: Setting your password on the 
> command-line is insecure. Consider using -P instead.
> 17/07/26 21:23:38 ERROR tool.BaseSqoopTool: Error parsing arguments for 
> create-hive-table:
> 17/07/26 21:23:38 ERROR tool.BaseSqoopTool: Unrecognized argument: 
> --as-avrodatafile
> Please correct me if I missed anything.
> Regards
> Nitish Khanna





[jira] [Assigned] (SQOOP-3187) Sqoop import as PARQUET to S3 failed

2017-08-10 Thread Sandish Kumar HN (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandish Kumar HN reassigned SQOOP-3187:
---

Assignee: Sandish Kumar HN  (was: Eric Lin)

> Sqoop import as PARQUET to S3 failed
> 
>
> Key: SQOOP-3187
> URL: https://issues.apache.org/jira/browse/SQOOP-3187
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Surendra Nichenametla
>Assignee: Sandish Kumar HN
>
> Sqoop import as a parquet file to S3 fails. The command and error are given 
> below. However, import to an HDFS location works.
> sqoop import --connect "jdbc:oracle:thin:@:1521/ORCL" --table 
> mytable --username myuser --password mypass --target-dir s3://bucket/foo/bar/ 
> --columns col1,col2 -m1 --as-parquetfile
> 17/05/09 21:00:18 ERROR tool.ImportTool: Imported Failed: Wrong FS: 
> s3://bucket/foo/bar, expected: hdfs://master-ip:8020
> P.S. I tried this from Amazon EMR cluster.


