[jira] [Updated] (SQOOP-3379) Using option sqoop.jobbase.serialize.sqoopoptions=true gives NullPointerException

2018-09-03 Thread Siba Prasad Mishra (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siba Prasad Mishra updated SQOOP-3379:
--
Priority: Minor  (was: Critical)

> Using option sqoop.jobbase.serialize.sqoopoptions=true gives 
> NullPointerException
> -
>
> Key: SQOOP-3379
> URL: https://issues.apache.org/jira/browse/SQOOP-3379
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.7
> Environment: Sqoop 1.4.7 on EMR (This happens in Sqoop 1.4.7)
>Reporter: Siba Prasad Mishra
>Priority: Minor
>
> Using the option sqoop.jobbase.serialize.sqoopoptions to serialize all Sqoop 
> options to the Hadoop configuration throws a NullPointerException when the 
> sqoop command is run. 
> The stack trace is given below:
> 18/09/03 06:56:34 ERROR sqoop.Sqoop: Got exception running Sqoop: 
> java.lang.NullPointerException
> java.lang.NullPointerException
>         at org.json.JSONObject.<init>(JSONObject.java:144)
>         at 
> org.apache.sqoop.util.SqoopJsonUtil.getJsonStringforMap(SqoopJsonUtil.java:43)
>         at 
> org.apache.sqoop.SqoopOptions.writeProperties(SqoopOptions.java:785)
>         at 
> org.apache.sqoop.mapreduce.JobBase.putSqoopOptionsToConfiguration(JobBase.java:392)
>         at org.apache.sqoop.mapreduce.JobBase.createJob(JobBase.java:378)
>         at 
> org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:253)
>         at 
> org.apache.sqoop.manager.SqlManager.importQuery(SqlManager.java:748)
>         at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:522)
>         at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:628)
>         at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>         at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
>         at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
>         at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
>         at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
>  
> It looks like there is no null check when serializing customToolOptions. 
>  
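A null-safe sketch of the serialization helper described above, modeled on SqoopJsonUtil.getJsonStringforMap (the class name and the simplified escaping here are illustrative, not the actual Sqoop code):

```java
import java.util.Map;
import java.util.TreeMap;

public class NullSafeJsonMap {
    // Null-safe variant of the map-to-JSON helper: return an empty JSON
    // object for a null map (e.g. an unset customToolOptions) instead of
    // handing null to the JSON library, which raises NullPointerException.
    public static String getJsonStringForMap(Map<String, String> map) {
        if (map == null || map.isEmpty()) {
            return "{}";
        }
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, String> e : map.entrySet()) {
            if (!first) {
                sb.append(',');
            }
            // Simplified: assumes keys/values need no JSON escaping.
            sb.append('"').append(e.getKey()).append("\":\"")
              .append(e.getValue()).append('"');
            first = false;
        }
        return sb.append('}').toString();
    }

    public static void main(String[] args) {
        System.out.println(getJsonStringForMap(null));
        Map<String, String> opts = new TreeMap<>();
        opts.put("sqoop.opt", "value");
        System.out.println(getJsonStringForMap(opts));
    }
}
```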



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (SQOOP-3379) Using option sqoop.jobbase.serialize.sqoopoptions=true gives NullPointerException

2018-09-03 Thread Siba Prasad Mishra (JIRA)
Siba Prasad Mishra created SQOOP-3379:
-

 Summary: Using option sqoop.jobbase.serialize.sqoopoptions=true 
gives NullPointerException
 Key: SQOOP-3379
 URL: https://issues.apache.org/jira/browse/SQOOP-3379
 Project: Sqoop
  Issue Type: Bug
Affects Versions: 1.4.7
 Environment: Sqoop 1.4.7 on EMR (This happens in Sqoop 1.4.7)
Reporter: Siba Prasad Mishra


Using the option sqoop.jobbase.serialize.sqoopoptions to serialize all Sqoop 
options to the Hadoop configuration throws a NullPointerException when the 
sqoop command is run. 

The stack trace is given below:
18/09/03 06:56:34 ERROR sqoop.Sqoop: Got exception running Sqoop: 
java.lang.NullPointerException
java.lang.NullPointerException
        at org.json.JSONObject.<init>(JSONObject.java:144)
        at 
org.apache.sqoop.util.SqoopJsonUtil.getJsonStringforMap(SqoopJsonUtil.java:43)
        at org.apache.sqoop.SqoopOptions.writeProperties(SqoopOptions.java:785)
        at 
org.apache.sqoop.mapreduce.JobBase.putSqoopOptionsToConfiguration(JobBase.java:392)
        at org.apache.sqoop.mapreduce.JobBase.createJob(JobBase.java:378)
        at 
org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:253)
        at org.apache.sqoop.manager.SqlManager.importQuery(SqlManager.java:748)
        at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:522)
        at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:628)
        at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
        at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
        at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
 
It looks like there is no null check when serializing customToolOptions.
 





[jira] [Commented] (SQOOP-3058) Sqoop import with Netezza --direct fails properly but also produces NPE

2018-09-03 Thread Daniel Voros (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602110#comment-16602110
 ] 

Daniel Voros commented on SQOOP-3058:
-

[~kuldeepkulkarn...@gmail.com], I don't think there's a workaround, but please 
note that this issue is only about reporting an extra NPE in case of an error.

I've submitted a patch to throw a more meaningful exception.

> Sqoop import with Netezza --direct fails properly but also produces NPE
> ---
>
> Key: SQOOP-3058
> URL: https://issues.apache.org/jira/browse/SQOOP-3058
> Project: Sqoop
>  Issue Type: Bug
>Reporter: Markus Kemper
>Assignee: Daniel Voros
>Priority: Major
>
> The [error] is expected; however, the [npe] seems like a defect. See the 
> [test case] below.
> [error]
> ERROR:  relation does not exist SQOOP_SME_DB.SQOOP_SME1.SQOOP_SME1.T1
> [npe]
> 16/11/18 09:19:44 ERROR sqoop.Sqoop: Got exception running Sqoop: 
> java.lang.NullPointerException
> [test case]
> {noformat}
> #
> # STEP 01 - Setup Netezza Table and Data
> #
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "DROP TABLE SQOOP_SME1.T1"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "CREATE TABLE SQOOP_SME1.T1 (C1 INTEGER)"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "INSERT INTO SQOOP_SME1.T1 VALUES (1)"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "SELECT C1 FROM SQOOP_SME1.T1"
> #
> # STEP 02 - Test Import and Export (baseline)
> #
> sqoop import --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> "T1" --target-dir /user/root/t1 --delete-target-dir --num-mappers 1
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "DELETE FROM SQOOP_SME1.T1"
> sqoop export --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> "T1" --export-dir /user/root/t1 --num-mappers 1
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "SELECT C1 FROM SQOOP_SME1.T1"
> ---
> | C1  | 
> ---
> | 1   | 
> ---
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "DELETE FROM SQOOP_SME1.T1"
> sqoop export --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> "T1" --export-dir /user/root/t1 --num-mappers 1 --direct
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "SELECT C1 FROM SQOOP_SME1.T1"
> ---
> | C1  | 
> ---
> | 1   | 
> ---
>   
> #
> # STEP 03 - Test Import and Export (with SCHEMA in --table option AND 
> --direct)
> #
> /* Notes: This failure seems correct however the NPE after the failure seems 
> like a defect  */
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "DELETE FROM SQOOP_SME1.T1"
> sqoop export --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> "SQOOP_SME1.T1" --export-dir /user/root/t1 --num-mappers 1 --direct
> 16/11/18 09:19:44 ERROR manager.SqlManager: Error executing statement: 
> org.netezza.error.NzSQLException: ERROR:  relation does not exist 
> SQOOP_SME_DB.SQOOP_SME1.SQOOP_SME1.T1
> org.netezza.error.NzSQLException: ERROR:  relation does not exist 
> SQOOP_SME_DB.SQOOP_SME1.SQOOP_SME1.T1
>   at 
> org.netezza.internal.QueryExecutor.getNextResult(QueryExecutor.java:280)
>   at org.netezza.internal.QueryExecutor.execute(QueryExecutor.java:76)
>   at org.netezza.sql.NzConnection.execute(NzConnection.java:2869)
>   at 
> org.netezza.sql.NzPreparedStatament._execute(NzPreparedStatament.java:1126)
>   at 
> org.netezza.sql.NzPreparedStatament.prepare(NzPreparedStatament.java:1143)
>   at 
> org.netezza.sql.NzPreparedStatament.<init>(NzPreparedStatament.java:89)
>   at org.netezza.sql.NzConnection.prepareStatement(NzConnection.java:1589)
>   at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:763)
>   at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:786)
>   at 
> org.apache.sqoop.manager.SqlManager.getColumnNamesForRawQuery(SqlManager.java:151)
>   at 
> org.apache.sqoop.manager.SqlManager.getColumnNames(SqlManager.java:116)
>   at 
> org.apache.sqoop.mapreduce.netezza.NetezzaExternalTableExportJob.configureOutputFormat(NetezzaExternalTableExportJob.java:128)
>   at 
> org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:433)
>   at 
> org.apache.sqoop.manager.DirectNetezzaManager.exportTable(DirectNetezzaManager.java:209)
>   at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:81)
>   at 

Review Request 68607: Sqoop import with Netezza --direct fails properly but also produces NPE

2018-09-03 Thread daniel voros

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/68607/
---

Review request for Sqoop.


Bugs: SQOOP-3058
https://issues.apache.org/jira/browse/SQOOP-3058


Repository: sqoop-trunk


Description
---

We're not interrupting the import if we were unable to get the column names, 
which leads to an NPE later. We should check for null instead and throw a more 
meaningful exception.
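A sketch of the guard this describes, assuming the fix validates the result of the column-name lookup before configuring the output format (the class name, exception type, and message are illustrative, not the actual patch):

```java
public class ColumnNamesGuard {
    // Fail fast with a descriptive exception when column-name lookup
    // returned null (e.g. because the underlying query already failed),
    // instead of letting the null surface later as a NullPointerException.
    static String[] requireColumnNames(String[] colNames, String tableName) {
        if (colNames == null) {
            throw new IllegalStateException(
                "Could not retrieve column names for table " + tableName);
        }
        return colNames;
    }

    public static void main(String[] args) {
        System.out.println(requireColumnNames(new String[] {"C1"}, "T1")[0]);
    }
}
```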


Diffs
-

  
src/java/org/apache/sqoop/mapreduce/netezza/NetezzaExternalTableExportJob.java 
11ac95df 
  
src/test/org/apache/sqoop/mapreduce/netezza/TestNetezzaExternalTableExportJob.java
 PRE-CREATION 


Diff: https://reviews.apache.org/r/68607/diff/1/


Testing
---

added UT


Thanks,

daniel voros



[jira] [Assigned] (SQOOP-3058) Sqoop import with Netezza --direct fails properly but also produces NPE

2018-09-03 Thread Daniel Voros (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Voros reassigned SQOOP-3058:
---

Assignee: Daniel Voros

> Sqoop import with Netezza --direct fails properly but also produces NPE
> ---
>
> Key: SQOOP-3058
> URL: https://issues.apache.org/jira/browse/SQOOP-3058
> Project: Sqoop
>  Issue Type: Bug
>Reporter: Markus Kemper
>Assignee: Daniel Voros
>Priority: Major
>
> The [error] is expected; however, the [npe] seems like a defect. See the 
> [test case] below.
> [error]
> ERROR:  relation does not exist SQOOP_SME_DB.SQOOP_SME1.SQOOP_SME1.T1
> [npe]
> 16/11/18 09:19:44 ERROR sqoop.Sqoop: Got exception running Sqoop: 
> java.lang.NullPointerException
> [test case]
> {noformat}
> #
> # STEP 01 - Setup Netezza Table and Data
> #
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "DROP TABLE SQOOP_SME1.T1"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "CREATE TABLE SQOOP_SME1.T1 (C1 INTEGER)"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "INSERT INTO SQOOP_SME1.T1 VALUES (1)"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "SELECT C1 FROM SQOOP_SME1.T1"
> #
> # STEP 02 - Test Import and Export (baseline)
> #
> sqoop import --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> "T1" --target-dir /user/root/t1 --delete-target-dir --num-mappers 1
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "DELETE FROM SQOOP_SME1.T1"
> sqoop export --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> "T1" --export-dir /user/root/t1 --num-mappers 1
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "SELECT C1 FROM SQOOP_SME1.T1"
> ---
> | C1  | 
> ---
> | 1   | 
> ---
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "DELETE FROM SQOOP_SME1.T1"
> sqoop export --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> "T1" --export-dir /user/root/t1 --num-mappers 1 --direct
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "SELECT C1 FROM SQOOP_SME1.T1"
> ---
> | C1  | 
> ---
> | 1   | 
> ---
>   
> #
> # STEP 03 - Test Import and Export (with SCHEMA in --table option AND 
> --direct)
> #
> /* Notes: This failure seems correct however the NPE after the failure seems 
> like a defect  */
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "DELETE FROM SQOOP_SME1.T1"
> sqoop export --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> "SQOOP_SME1.T1" --export-dir /user/root/t1 --num-mappers 1 --direct
> 16/11/18 09:19:44 ERROR manager.SqlManager: Error executing statement: 
> org.netezza.error.NzSQLException: ERROR:  relation does not exist 
> SQOOP_SME_DB.SQOOP_SME1.SQOOP_SME1.T1
> org.netezza.error.NzSQLException: ERROR:  relation does not exist 
> SQOOP_SME_DB.SQOOP_SME1.SQOOP_SME1.T1
>   at 
> org.netezza.internal.QueryExecutor.getNextResult(QueryExecutor.java:280)
>   at org.netezza.internal.QueryExecutor.execute(QueryExecutor.java:76)
>   at org.netezza.sql.NzConnection.execute(NzConnection.java:2869)
>   at 
> org.netezza.sql.NzPreparedStatament._execute(NzPreparedStatament.java:1126)
>   at 
> org.netezza.sql.NzPreparedStatament.prepare(NzPreparedStatament.java:1143)
>   at 
> org.netezza.sql.NzPreparedStatament.<init>(NzPreparedStatament.java:89)
>   at org.netezza.sql.NzConnection.prepareStatement(NzConnection.java:1589)
>   at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:763)
>   at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:786)
>   at 
> org.apache.sqoop.manager.SqlManager.getColumnNamesForRawQuery(SqlManager.java:151)
>   at 
> org.apache.sqoop.manager.SqlManager.getColumnNames(SqlManager.java:116)
>   at 
> org.apache.sqoop.mapreduce.netezza.NetezzaExternalTableExportJob.configureOutputFormat(NetezzaExternalTableExportJob.java:128)
>   at 
> org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:433)
>   at 
> org.apache.sqoop.manager.DirectNetezzaManager.exportTable(DirectNetezzaManager.java:209)
>   at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:81)
>   at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:100)
>   at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
>   at 

[jira] [Commented] (SQOOP-3378) Error during direct Netezza import/export can interrupt process in uncontrolled ways

2018-09-03 Thread Daniel Voros (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602058#comment-16602058
 ] 

Daniel Voros commented on SQOOP-3378:
-

Attached review request.

> Error during direct Netezza import/export can interrupt process in 
> uncontrolled ways
> 
>
> Key: SQOOP-3378
> URL: https://issues.apache.org/jira/browse/SQOOP-3378
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.7
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Fix For: 1.5.0, 3.0.0
>
>
> SQLException during JDBC operation in direct Netezza import/export signals 
> parent thread to fail fast by interrupting it (see 
> [here|https://github.com/apache/sqoop/blob/c814e58348308b05b215db427412cd6c0b21333e/src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaJDBCStatementRunner.java#L92]).
> We're [trying to process the interrupt in the 
> parent|https://github.com/apache/sqoop/blob/c814e58348308b05b215db427412cd6c0b21333e/src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaExternalTableExportMapper.java#L232]
>  (main) thread, but there's no guarantee that we're not in some blocking 
> internal call that will process the interrupted flag and reset it before 
> we're able to check.
> It is also possible that the parent thread has passed the "checking part" 
> when it gets interrupted. In case of {{NetezzaExternalTableExportMapper}} 
> this can interrupt the upload of log files.
> I'd recommend using some other means of communication between the threads 
> than interrupts.





Review Request 68606: Error during direct Netezza import/export can interrupt process in uncontrolled ways

2018-09-03 Thread daniel voros

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/68606/
---

Review request for Sqoop.


Bugs: SQOOP-3378
https://issues.apache.org/jira/browse/SQOOP-3378


Repository: sqoop-trunk


Description
---

`SQLException` during JDBC operation in direct Netezza import/export signals 
parent thread to fail fast by interrupting it.
We're trying to process the interrupt in the parent (main) thread, but there's 
no guarantee that we're not in some internal call that will process the 
interrupted flag and reset it before we're able to check.

It is also possible that the parent thread has passed the "checking part" when 
it gets interrupted. In case of `NetezzaExternalTableExportMapper` this can 
interrupt the upload of log files.

I'd recommend using some other means of communication between the threads than 
interrupts.
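One possible alternative to interrupt-based signalling, along the lines the description suggests: the JDBC worker records its failure in a shared atomic slot that the parent polls at well-defined points. This is a sketch under that assumption; the class and method names are illustrative, not the actual patch.

```java
import java.util.concurrent.atomic.AtomicReference;

public class JdbcFailureSignal {
    // Shared slot: the worker thread stores its exception here and the
    // parent thread checks it explicitly, so no interrupt can be swallowed
    // by a blocking call or delivered at an unexpected point (e.g. during
    // the upload of log files).
    private final AtomicReference<Exception> failure = new AtomicReference<>();

    public void signalFailure(Exception e) {
        failure.compareAndSet(null, e); // keep only the first failure
    }

    public boolean hasFailed() {
        return failure.get() != null;
    }

    public Exception getFailure() {
        return failure.get();
    }

    public static void main(String[] args) throws InterruptedException {
        JdbcFailureSignal signal = new JdbcFailureSignal();
        Thread worker = new Thread(() ->
            signal.signalFailure(new RuntimeException("simulated SQLException")));
        worker.start();
        worker.join();
        // Parent polls the slot instead of reacting to an interrupt.
        if (signal.hasFailed()) {
            System.out.println("worker failed: " + signal.getFailure().getMessage());
        }
    }
}
```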


Diffs
-

  
src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaExternalTableExportMapper.java
 5bf21880 
  
src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaExternalTableImportMapper.java
 306062aa 
  
src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaJDBCStatementRunner.java 
cedfd235 
  
src/test/org/apache/sqoop/mapreduce/db/netezza/TestNetezzaExternalTableExportMapper.java
 PRE-CREATION 
  
src/test/org/apache/sqoop/mapreduce/db/netezza/TestNetezzaExternalTableImportMapper.java
 PRE-CREATION 


Diff: https://reviews.apache.org/r/68606/diff/1/


Testing
---

added new UTs and checked manual Netezza tests (NetezzaExportManualTest, 
NetezzaImportManualTest)


Thanks,

daniel voros



[jira] [Created] (SQOOP-3378) Error during direct Netezza import/export can interrupt process in uncontrolled ways

2018-09-03 Thread Daniel Voros (JIRA)
Daniel Voros created SQOOP-3378:
---

 Summary: Error during direct Netezza import/export can interrupt 
process in uncontrolled ways
 Key: SQOOP-3378
 URL: https://issues.apache.org/jira/browse/SQOOP-3378
 Project: Sqoop
  Issue Type: Bug
Affects Versions: 1.4.7
Reporter: Daniel Voros
Assignee: Daniel Voros
 Fix For: 1.5.0, 3.0.0


SQLException during JDBC operation in direct Netezza import/export signals 
parent thread to fail fast by interrupting it (see 
[here|https://github.com/apache/sqoop/blob/c814e58348308b05b215db427412cd6c0b21333e/src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaJDBCStatementRunner.java#L92]).

We're [trying to process the interrupt in the 
parent|https://github.com/apache/sqoop/blob/c814e58348308b05b215db427412cd6c0b21333e/src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaExternalTableExportMapper.java#L232]
 (main) thread, but there's no guarantee that we're not in some blocking 
internal call that will process the interrupted flag and reset it before we're 
able to check.

It is also possible that the parent thread has passed the "checking part" when 
it gets interrupted. In case of {{NetezzaExternalTableExportMapper}} this can 
interrupt the upload of log files.

I'd recommend using some other means of communication between the threads than 
interrupts.





[jira] [Commented] (SQOOP-3375) HiveMiniCluster does not restore hive-site.xml location

2018-09-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16601986#comment-16601986
 ] 

Hudson commented on SQOOP-3375:
---

SUCCESS: Integrated in Jenkins build Sqoop-hadoop200 #1206 (See 
[https://builds.apache.org/job/Sqoop-hadoop200/1206/])
SQOOP-3375: HiveMiniCluster does not restore hive-site.xml location (bogi: 
[https://git-wip-us.apache.org/repos/asf?p=sqoop.git;a=commit;h=c814e58348308b05b215db427412cd6c0b21333e])
* (edit) src/test/org/apache/sqoop/hive/minicluster/HiveMiniCluster.java


> HiveMiniCluster does not restore hive-site.xml location
> ---
>
> Key: SQOOP-3375
> URL: https://issues.apache.org/jira/browse/SQOOP-3375
> Project: Sqoop
>  Issue Type: Sub-task
>Reporter: Szabolcs Vasas
>Assignee: Szabolcs Vasas
>Priority: Major
> Attachments: SQOOP-3375.patch
>
>
> HiveMiniCluster sets the hive-site.xml location using 
> org.apache.hadoop.hive.conf.HiveConf#setHiveSiteLocation static method during 
> startup but it does not restore the original location during shutdown.
> This makes HCatalogImportTest and HCatalogExportTest fail if they are run in 
> the same JVM after any test using HiveMiniCluster.
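The fix amounts to the classic save-and-restore pattern around a static configuration slot. A standalone model of it (a String field stands in for HiveConf's real hive-site.xml location, and the class name is illustrative):

```java
public class HiveSiteGuard {
    // HiveConf keeps the hive-site.xml location in a static field; a mini
    // cluster that overrides it must capture the original value on startup
    // and put it back on shutdown so later tests in the same JVM see it.
    private static String hiveSiteLocation = "conf/hive-site.xml";
    private String originalLocation;

    static void setHiveSiteLocation(String loc) { hiveSiteLocation = loc; }
    static String getHiveSiteLocation() { return hiveSiteLocation; }

    void start(String testLocation) {
        originalLocation = getHiveSiteLocation(); // save before overriding
        setHiveSiteLocation(testLocation);
    }

    void stop() {
        setHiveSiteLocation(originalLocation);    // restore on shutdown
    }

    public static void main(String[] args) {
        HiveSiteGuard guard = new HiveSiteGuard();
        guard.start("/tmp/minicluster-hive-site.xml");
        System.out.println("during test: " + getHiveSiteLocation());
        guard.stop();
        System.out.println("after test:  " + getHiveSiteLocation());
    }
}
```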





[jira] [Commented] (SQOOP-2949) SQL Syntax error when split-by column is of character type and min or max value has single quote inside it

2018-09-03 Thread Fero Szabo (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-2949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16601955#comment-16601955
 ] 

Fero Szabo commented on SQOOP-2949:
---

Hi [~gireeshp],

My email is [f...@cloudera.com|mailto:f...@cloudera.com] 

The release process doesn't have a defined schedule yet, so there is no 
timeline. Only one of the discussed items is still pending (Hadoop 3 / Hive 3 / 
HBase 2 support), i.e. just a library upgrade on the Sqoop side.

> SQL Syntax error when split-by column is of character type and min or max 
> value has single quote inside it
> --
>
> Key: SQOOP-2949
> URL: https://issues.apache.org/jira/browse/SQOOP-2949
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
> Environment: Sqoop 1.4.6
> Run on Hadoop 2.6.0
> On Ubuntu
>Reporter: Gireesh Puthumana
>Assignee: Gireesh Puthumana
>Priority: Major
>
> Did a sqoop import from mysql table "emp", with split-by column "ename", 
> which is a varchar(100) type.
> +Used below command:+
> sqoop import --connect jdbc:mysql://localhost/testdb --username root 
> --password * --table emp --m 2 --target-dir /sqoopTest/5 --split-by ename;
> +Ename has following records:+
> | ename   |
> | gireesh |
> | aavesh  |
> | shiva'  |
> | jamir   |
> | balu|
> | santosh |
> | sameer  |
> Min value is "aavesh" and max value is "shiva'" (please note the single quote 
> inside max value).
> When run, it tried to execute the query below in mapper 2 and failed:
> SELECT `ename`, `eid`, `deptid` FROM `emp` AS `emp` WHERE ( `ename` >= 
> 'jd聯聭聪G耀' ) AND ( `ename` <= 'shiva'' )
> +Stack trace:+
> {quote}
> 2016-06-05 16:54:06,749 ERROR [main] 
> org.apache.sqoop.mapreduce.db.DBRecordReader: Top level exception: 
> com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error 
> in your SQL syntax; check the manual that corresponds to your MySQL server 
> version for the right syntax to use near ''shiva'' )' at line 1
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
>   at com.mysql.jdbc.Util.getInstance(Util.java:387)
>   at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:942)
>   at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3966)
>   at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3902)
>   at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2526)
>   at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2673)
>   at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2549)
>   at 
> com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1861)
>   at 
> com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1962)
>   at 
> org.apache.sqoop.mapreduce.db.DBRecordReader.executeQuery(DBRecordReader.java:111)
>   at 
> org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:235)
>   at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
>   at 
> org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
>   at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
>   at 
> org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> {quote}
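The root cause is that the split boundary value is spliced into the WHERE clause without escaping the embedded single quote. In standard SQL a quote inside a string literal is escaped by doubling it; a minimal sketch of that escaping follows (illustrative, not the actual Sqoop patch):

```java
public class SqlStringLiteral {
    // Turn a split-by boundary value into a SQL string literal, doubling
    // any embedded single quotes so values like "shiva'" stay valid SQL.
    static String quote(String value) {
        return "'" + value.replace("'", "''") + "'";
    }

    public static void main(String[] args) {
        // The failing predicate becomes a valid literal after escaping.
        System.out.println("( `ename` <= " + quote("shiva'") + " )");
    }
}
```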





[jira] [Commented] (SQOOP-3375) HiveMiniCluster does not restore hive-site.xml location

2018-09-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16601956#comment-16601956
 ] 

ASF subversion and git services commented on SQOOP-3375:


Commit c814e58348308b05b215db427412cd6c0b21333e in sqoop's branch 
refs/heads/trunk from [~BoglarkaEgyed]
[ https://git-wip-us.apache.org/repos/asf?p=sqoop.git;h=c814e58 ]

SQOOP-3375: HiveMiniCluster does not restore hive-site.xml location

(Szabolcs Vasas via Boglarka Egyed)


> HiveMiniCluster does not restore hive-site.xml location
> ---
>
> Key: SQOOP-3375
> URL: https://issues.apache.org/jira/browse/SQOOP-3375
> Project: Sqoop
>  Issue Type: Sub-task
>Reporter: Szabolcs Vasas
>Assignee: Szabolcs Vasas
>Priority: Major
> Attachments: SQOOP-3375.patch
>
>
> HiveMiniCluster sets the hive-site.xml location using 
> org.apache.hadoop.hive.conf.HiveConf#setHiveSiteLocation static method during 
> startup but it does not restore the original location during shutdown.
> This makes HCatalogImportTest and HCatalogExportTest fail if they are run in 
> the same JVM after any test using HiveMiniCluster.





Re: Review Request 68569: HiveMiniCluster does not restore hive-site.xml location

2018-09-03 Thread daniel voros

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/68569/#review208249
---


Ship it!




Ship It!

- daniel voros


On Aug. 30, 2018, 11:27 a.m., Szabolcs Vasas wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/68569/
> ---
> 
> (Updated Aug. 30, 2018, 11:27 a.m.)
> 
> 
> Review request for Sqoop.
> 
> 
> Bugs: SQOOP-3375
> https://issues.apache.org/jira/browse/SQOOP-3375
> 
> 
> Repository: sqoop-trunk
> 
> 
> Description
> ---
> 
> HiveMiniCluster sets the hive-site.xml location using 
> org.apache.hadoop.hive.conf.HiveConf#setHiveSiteLocation static method during 
> startup but it does not restore the original location during shutdown.
> 
> This makes HCatalogImportTest and HCatalogExportTest fail if they are run in 
> the same JVM after any test using HiveMiniCluster.
> 
> 
> Diffs
> -
> 
>   src/test/org/apache/sqoop/hive/minicluster/HiveMiniCluster.java 19bb7605c 
> 
> 
> Diff: https://reviews.apache.org/r/68569/diff/1/
> 
> 
> Testing
> ---
> 
> Executed unit and third party tests.
> 
> 
> Thanks,
> 
> Szabolcs Vasas
> 
>



Re: Review Request 68569: HiveMiniCluster does not restore hive-site.xml location

2018-09-03 Thread Fero Szabo via Review Board

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/68569/#review208248
---


Ship it!




Ship It!

- Fero Szabo


On Aug. 30, 2018, 11:27 a.m., Szabolcs Vasas wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/68569/
> ---
> 
> (Updated Aug. 30, 2018, 11:27 a.m.)
> 
> 
> Review request for Sqoop.
> 
> 
> Bugs: SQOOP-3375
> https://issues.apache.org/jira/browse/SQOOP-3375
> 
> 
> Repository: sqoop-trunk
> 
> 
> Description
> ---
> 
> HiveMiniCluster sets the hive-site.xml location using 
> org.apache.hadoop.hive.conf.HiveConf#setHiveSiteLocation static method during 
> startup but it does not restore the original location during shutdown.
> 
> This makes HCatalogImportTest and HCatalogExportTest fail if they are run in 
> the same JVM after any test using HiveMiniCluster.
> 
> 
> Diffs
> -
> 
>   src/test/org/apache/sqoop/hive/minicluster/HiveMiniCluster.java 19bb7605c 
> 
> 
> Diff: https://reviews.apache.org/r/68569/diff/1/
> 
> 
> Testing
> ---
> 
> Executed unit and third party tests.
> 
> 
> Thanks,
> 
> Szabolcs Vasas
> 
>



[jira] [Updated] (SQOOP-3374) Assigning HDFS path to --bindir is giving error "java.lang.reflect.InvocationTargetException"

2018-09-03 Thread Amit Joshi (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Joshi updated SQOOP-3374:
--
Priority: Blocker  (was: Minor)

> Assigning HDFS path to --bindir is giving error 
> "java.lang.reflect.InvocationTargetException"
> -
>
> Key: SQOOP-3374
> URL: https://issues.apache.org/jira/browse/SQOOP-3374
> Project: Sqoop
>  Issue Type: Wish
>  Components: sqoop2-api
>Reporter: Amit Joshi
>Priority: Blocker
>
> When I try to assign an HDFS directory path to --bindir in my sqoop 
> command, it throws the error "java.lang.reflect.InvocationTargetException".
> My sqoop query looks like this:
> sqoop import -connect connection_string --username username --password-file 
> file_path --query 'select * from EDW_PROD.RXCLM_LINE_FACT_DENIED 
> PARTITION(RXCLM_LINE_FACTP201808) where $CONDITIONS' --as-parquetfile 
> --compression-codec org.apache.hadoop.io.compress.SnappyCodec --append 
> --target-dir target_dir *--bindir hdfs://user/projects/* --split-by RX_ID 
> --null-string '/N' --null-non-string '/N' --fields-terminated-by ',' -m 10
>  
> It is creating folder "hdfs:" in my home directory.
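The observed "hdfs:" folder is consistent with --bindir being treated as a plain local filesystem path: java.io.File does not interpret the hdfs:// scheme, so the first path segment becomes a literal relative directory name. A small demonstration of that behavior (assuming the option is handled with local file APIs, which the reported symptom suggests):

```java
import java.io.File;

public class BindirPathDemo {
    public static void main(String[] args) {
        // java.io.File has no notion of URI schemes; "hdfs://user/projects/"
        // is just a relative path whose first component is "hdfs:"
        // (duplicate separators are collapsed during normalization).
        File bindir = new File("hdfs://user/projects/");
        System.out.println(bindir.getPath());
        // Creating it (e.g. for compiled classes) makes a local "hdfs:" folder.
    }
}
```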


