[jira] [Created] (SQOOP-2829) Sqoop2: LinkRestTest should pass when run against a real cluster

2016-02-10 Thread Abraham Fine (JIRA)
Abraham Fine created SQOOP-2829:
---

 Summary: Sqoop2: LinkRestTest should pass when run against a real 
cluster
 Key: SQOOP-2829
 URL: https://issues.apache.org/jira/browse/SQOOP-2829
 Project: Sqoop
  Issue Type: Bug
Reporter: Abraham Fine
Assignee: Abraham Fine


Currently the LinkRestTest creates a link from the generic-jdbc-connector. This 
link must specify a JDBC class, and that class may not be present on the real 
cluster. We should use a different connector.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SQOOP-2829) Sqoop2: LinkRestTest should pass when run against a real cluster

2016-02-10 Thread Abraham Fine (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abraham Fine updated SQOOP-2829:

Affects Version/s: 1.99.6

> Sqoop2: LinkRestTest should pass when run against a real cluster
> 
>
> Key: SQOOP-2829
> URL: https://issues.apache.org/jira/browse/SQOOP-2829
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.99.6
>Reporter: Abraham Fine
>Assignee: Abraham Fine
> Attachments: SQOOP-2829.patch
>
>
> Currently the LinkRestTest creates a link from the generic-jdbc-connector. 
> This link must specify a JDBC class, and that class may not be present on the 
> real cluster. We should use a different connector.





[jira] [Updated] (SQOOP-2829) Sqoop2: LinkRestTest should pass when run against a real cluster

2016-02-10 Thread Abraham Fine (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abraham Fine updated SQOOP-2829:

Attachment: SQOOP-2829.patch

> Sqoop2: LinkRestTest should pass when run against a real cluster
> 
>
> Key: SQOOP-2829
> URL: https://issues.apache.org/jira/browse/SQOOP-2829
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.99.6
>Reporter: Abraham Fine
>Assignee: Abraham Fine
> Attachments: SQOOP-2829.patch
>
>
> Currently the LinkRestTest creates a link from the generic-jdbc-connector. 
> This link must specify a JDBC class, and that class may not be present on the 
> real cluster. We should use a different connector.





[jira] [Commented] (SQOOP-2829) Sqoop2: LinkRestTest should pass when run against a real cluster

2016-02-10 Thread Abraham Fine (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141433#comment-15141433
 ] 

Abraham Fine commented on SQOOP-2829:
-

No review board request due to the simplicity of the patch.

> Sqoop2: LinkRestTest should pass when run against a real cluster
> 
>
> Key: SQOOP-2829
> URL: https://issues.apache.org/jira/browse/SQOOP-2829
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.99.6
>Reporter: Abraham Fine
>Assignee: Abraham Fine
> Attachments: SQOOP-2829.patch
>
>
> Currently the LinkRestTest creates a link from the generic-jdbc-connector. 
> This link must specify a JDBC class, and that class may not be present on the 
> real cluster. We should use a different connector.





[jira] [Created] (SQOOP-2830) Sqoop2: mysql-connector-java should not be scoped to test

2016-02-10 Thread Abraham Fine (JIRA)
Abraham Fine created SQOOP-2830:
---

 Summary: Sqoop2: mysql-connector-java should not be scoped to test
 Key: SQOOP-2830
 URL: https://issues.apache.org/jira/browse/SQOOP-2830
 Project: Sqoop
  Issue Type: Bug
Reporter: Abraham Fine
Assignee: Abraham Fine


According to SQOOP-2391, mysql-connector-java is included in the binary 
distribution. This no longer appears to be the case, so 
mysql-connector-java should no longer be scoped to test.
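
In Maven terms, the change described above would drop the test scope from the 
dependency declaration; a sketch of the pom.xml entry (the version property is 
a placeholder, not taken from Sqoop's actual build):

```xml
<dependency>
  <groupId>mysql</groupId>
  <artifactId>mysql-connector-java</artifactId>
  <version>${mysql.version}</version>
  <!-- previously: <scope>test</scope>, which kept the jar out of the binary distribution -->
</dependency>
```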





[jira] [Commented] (SQOOP-2829) Sqoop2: LinkRestTest should pass when run against a real cluster

2016-02-10 Thread Sqoop QA bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141658#comment-15141658
 ] 

Sqoop QA bot commented on SQOOP-2829:
-

Testing file 
[SQOOP-2829.patch|https://issues.apache.org/jira/secure/attachment/12787306/SQOOP-2829.patch]
 against branch sqoop2 took 1:48:32.063948.

{color:red}Overall:{color} -1 due to an error(s), see details below:

{color:green}SUCCESS:{color} Clean was successful
{color:green}SUCCESS:{color} Patch applied correctly
{color:green}SUCCESS:{color} Patch adds/modifies test cases
{color:green}SUCCESS:{color} License check passed
{color:green}SUCCESS:{color} Patch compiled
{color:red}ERROR:{color} Some unit tests failed 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2186/artifact/patch-process/test_unit.txt],
 executed 1481 tests)
* Test {{org.apache.sqoop.connector.kafka.TestKafkaLoader}}


{color:green}SUCCESS:{color} Test coverage did not decrease 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2186/artifact/patch-process/cobertura_report.txt])
{color:green}SUCCESS:{color} No new findbugs warnings 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2186/artifact/patch-process/findbugs_report.txt])
{color:red}ERROR:{color} Some integration tests failed 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2186/artifact/patch-process/test_integration.txt],
 executed 0 tests)
* Test {{org.apache.sqoop.connector.kafka.TestKafkaLoader}}



Console output is available 
[here|https://builds.apache.org/job/PreCommit-SQOOP-Build/2186/console].

This message is automatically generated.

> Sqoop2: LinkRestTest should pass when run against a real cluster
> 
>
> Key: SQOOP-2829
> URL: https://issues.apache.org/jira/browse/SQOOP-2829
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.99.6
>Reporter: Abraham Fine
>Assignee: Abraham Fine
> Attachments: SQOOP-2829.patch
>
>
> Currently the LinkRestTest creates a link from the generic-jdbc-connector. 
> This link must specify a JDBC class, and that class may not be present on the 
> real cluster. We should use a different connector.





[jira] [Resolved] (SQOOP-2830) Sqoop2: mysql-connector-java should not be scoped to test

2016-02-10 Thread Abraham Fine (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abraham Fine resolved SQOOP-2830.
-
Resolution: Invalid

We should not be shipping the MySQL JDBC driver at all.

> Sqoop2: mysql-connector-java should not be scoped to test
> -
>
> Key: SQOOP-2830
> URL: https://issues.apache.org/jira/browse/SQOOP-2830
> Project: Sqoop
>  Issue Type: Bug
>Reporter: Abraham Fine
>Assignee: Abraham Fine
>
> According to SQOOP-2391, mysql-connector-java is included in the binary 
> distribution. This no longer appears to be the case, so 
> mysql-connector-java should no longer be scoped to test.





Re: Review Request 43348: SQOOP-2828: Sqoop2: AvroIntermediateDataFormat should read Decimals as BigDecimals instead of Strings

2016-02-10 Thread Jarek Cecho

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/43348/#review118749
---


Ship it!




+1 as long as the precommit hook is happy.

- Jarek Cecho


On Feb. 8, 2016, 11:55 p.m., Abraham Fine wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/43348/
> ---
> 
> (Updated Feb. 8, 2016, 11:55 p.m.)
> 
> 
> Review request for Sqoop.
> 
> 
> Bugs: SQOOP-2828
> https://issues.apache.org/jira/browse/SQOOP-2828
> 
> 
> Repository: sqoop-sqoop2
> 
> 
> Description
> ---
> 
> init
> 
> 
> Diffs
> -
> 
>   
> connector/connector-sdk/src/main/java/org/apache/sqoop/connector/idf/AVROIntermediateDataFormat.java
>  e409fc1227c639cf8e04b7bf064854e9339bb77c 
>   
> connector/connector-sdk/src/test/java/org/apache/sqoop/connector/idf/TestAVROIntermediateDataFormat.java
>  847572076ee24f35b4c6a270972fdcccbc8ca596 
> 
> Diff: https://reviews.apache.org/r/43348/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Abraham Fine
> 
>
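
For background on the change under review, Avro's decimal logical type stores 
the unscaled value as a two's-complement byte array, with the scale carried in 
the schema; a minimal sketch of decoding it into a java.math.BigDecimal rather 
than a String (class and method names here are illustrative, not the patch's 
actual code):

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class DecimalFromAvro {
    // Avro decimals: unscaled value as big-endian two's-complement bytes,
    // scale taken from the logical type declared in the schema.
    static BigDecimal fromBytes(byte[] unscaled, int scale) {
        return new BigDecimal(new BigInteger(unscaled), scale);
    }

    public static void main(String[] args) {
        byte[] unscaled = BigInteger.valueOf(12345).toByteArray();
        System.out.println(fromBytes(unscaled, 2)); // prints 123.45
    }
}
```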



[jira] [Commented] (SQOOP-2828) Sqoop2: AvroIntermediateDataFormat should read Decimals as BigDecimals instead of Strings

2016-02-10 Thread Sqoop QA bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142061#comment-15142061
 ] 

Sqoop QA bot commented on SQOOP-2828:
-

Testing file 
[SQOOP-2828.patch|https://issues.apache.org/jira/secure/attachment/12786915/SQOOP-2828.patch]
 against branch sqoop2 took 1:25:27.154396.

{color:green}Overall:{color} +1 all checks pass

{color:green}SUCCESS:{color} Clean was successful
{color:green}SUCCESS:{color} Patch applied correctly
{color:green}SUCCESS:{color} Patch adds/modifies test cases
{color:green}SUCCESS:{color} License check passed
{color:green}SUCCESS:{color} Patch compiled
{color:green}SUCCESS:{color} All unit tests passed (executed 1700 tests)
{color:green}SUCCESS:{color} Test coverage did not decrease 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2187/artifact/patch-process/cobertura_report.txt])
{color:green}SUCCESS:{color} No new findbugs warnings 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2187/artifact/patch-process/findbugs_report.txt])
{color:green}SUCCESS:{color} All integration tests passed (executed 207 tests)

Console output is available 
[here|https://builds.apache.org/job/PreCommit-SQOOP-Build/2187/console].

This message is automatically generated.

> Sqoop2: AvroIntermediateDataFormat should read Decimals as BigDecimals 
> instead of Strings
> -
>
> Key: SQOOP-2828
> URL: https://issues.apache.org/jira/browse/SQOOP-2828
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.99.6
>Reporter: Abraham Fine
>Assignee: Abraham Fine
> Attachments: SQOOP-2828.patch
>
>






[jira] [Commented] (SQOOP-2833) Sqoop2: Integration Tests: Allow setting which "time type" should be used based on the DatabaseProvider

2016-02-10 Thread Sqoop QA bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142262#comment-15142262
 ] 

Sqoop QA bot commented on SQOOP-2833:
-

Testing file 
[SQOOP-2833.patch|https://issues.apache.org/jira/secure/attachment/12787413/SQOOP-2833.patch]
 against branch sqoop2 took 2:01:23.640326.

{color:red}Overall:{color} -1 due to an error(s), see details below:

{color:green}SUCCESS:{color} Clean was successful
{color:green}SUCCESS:{color} Patch applied correctly
{color:green}SUCCESS:{color} Patch adds/modifies test cases
{color:green}SUCCESS:{color} License check passed
{color:green}SUCCESS:{color} Patch compiled
{color:green}SUCCESS:{color} All unit tests passed (executed 1700 tests)
{color:green}SUCCESS:{color} Test coverage did not decrease 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2194/artifact/patch-process/cobertura_report.txt])
{color:green}SUCCESS:{color} No new findbugs warnings 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2194/artifact/patch-process/findbugs_report.txt])
{color:red}ERROR:{color} Some integration tests failed 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2194/artifact/patch-process/test_integration.txt],
 executed 0 tests)

Console output is available 
[here|https://builds.apache.org/job/PreCommit-SQOOP-Build/2194/console].

This message is automatically generated.

> Sqoop2: Integration Tests: Allow setting which "time type" should be used 
> based on the DatabaseProvider
> ---
>
> Key: SQOOP-2833
> URL: https://issues.apache.org/jira/browse/SQOOP-2833
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.99.6
>Reporter: Abraham Fine
>Assignee: Abraham Fine
> Attachments: SQOOP-2833.patch
>
>
> The different databases we are looking to support behave differently with 
> respect to "time" data types. We should be able to dynamically choose the 
> right type for the test.
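
One way the description above could be realized is a per-provider hook that 
reports the database's preferred time column type; a hedged sketch, with class 
and method names invented for illustration (Oracle, for instance, has no 
standalone TIME type):

```java
// Illustrative only: Sqoop's real DatabaseProvider test API may differ.
abstract class DatabaseProvider {
    // ANSI default; providers override when the database lacks TIME.
    String timeType() { return "TIME"; }
}

class OracleProvider extends DatabaseProvider {
    @Override
    String timeType() { return "TIMESTAMP"; }
}

public class TimeTypeDemo {
    public static void main(String[] args) {
        System.out.println(new OracleProvider().timeType()); // prints TIMESTAMP
    }
}
```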





[jira] [Commented] (SQOOP-2829) Sqoop2: LinkRestTest should pass when run against a real cluster

2016-02-10 Thread Sqoop QA bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142253#comment-15142253
 ] 

Sqoop QA bot commented on SQOOP-2829:
-

Testing file 
[SQOOP-2829.patch|https://issues.apache.org/jira/secure/attachment/12787306/SQOOP-2829.patch]
 against branch sqoop2 took 1:40:01.417590.

{color:red}Overall:{color} -1 due to an error(s), see details below:

{color:green}SUCCESS:{color} Clean was successful
{color:green}SUCCESS:{color} Patch applied correctly
{color:green}SUCCESS:{color} Patch adds/modifies test cases
{color:green}SUCCESS:{color} License check passed
{color:green}SUCCESS:{color} Patch compiled
{color:green}SUCCESS:{color} All unit tests passed (executed 1700 tests)
{color:green}SUCCESS:{color} Test coverage did not decrease 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2193/artifact/patch-process/cobertura_report.txt])
{color:green}SUCCESS:{color} No new findbugs warnings 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2193/artifact/patch-process/findbugs_report.txt])
{color:red}ERROR:{color} Some integration tests failed 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2193/artifact/patch-process/test_integration.txt],
 executed 0 tests)

Console output is available 
[here|https://builds.apache.org/job/PreCommit-SQOOP-Build/2193/console].

This message is automatically generated.

> Sqoop2: LinkRestTest should pass when run against a real cluster
> 
>
> Key: SQOOP-2829
> URL: https://issues.apache.org/jira/browse/SQOOP-2829
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.99.6
>Reporter: Abraham Fine
>Assignee: Abraham Fine
> Attachments: SQOOP-2829.patch
>
>
> Currently the LinkRestTest creates a link from the generic-jdbc-connector. 
> This link must specify a JDBC class, and that class may not be present on the 
> real cluster. We should use a different connector.





[jira] [Updated] (SQOOP-2821) Direct export to Netezza : user/owner confusion

2016-02-10 Thread Benjamin BONNET (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin BONNET updated SQOOP-2821:
---
Description: 
Hi,
when exporting to Netezza, if connected user (in the Netezza URL) is not the 
target table owner, things go wrong :
- if you do not use qualified table name, the table existence check will fail 
since SQOOP will assume table owner is the same as connected user
- if you do use qualified table name, the table existence check will succeed 
but table export will fail since SQOOP will try to export to a twice qualified 
table (db.owner.owner.table instead of db.owner.table)
Regards

  was:
Hi,
when exporting to Netezza, if connected user (in the Netezza URL) is not the 
target table owner, things go wrong :
- if you don not use qualified table name, the table existence check will fail 
since SQOOP will assume table owner is the same as connected user
- if you do use qualified table name, the table existence check will succeed 
but table export will fail since SQOOP will try to export to a twice qualified 
table (db.owner.owner.table instead of db.owner.table)
Regards


> Direct export to Netezza : user/owner confusion
> ---
>
> Key: SQOOP-2821
> URL: https://issues.apache.org/jira/browse/SQOOP-2821
> Project: Sqoop
>  Issue Type: Bug
>  Components: connectors
>Affects Versions: 1.4.6
>Reporter: Benjamin BONNET
>
> Hi,
> when exporting to Netezza, if connected user (in the Netezza URL) is not the 
> target table owner, things go wrong :
> - if you do not use qualified table name, the table existence check will fail 
> since SQOOP will assume table owner is the same as connected user
> - if you do use qualified table name, the table existence check will succeed 
> but table export will fail since SQOOP will try to export to a twice 
> qualified table (db.owner.owner.table instead of db.owner.table)
> Regards





[GitHub] sqoop pull request: SQOOP-2821

2016-02-10 Thread bonnetb
GitHub user bonnetb opened a pull request:

https://github.com/apache/sqoop/pull/12

SQOOP-2821

Direct export to Netezza : user/owner confusion

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bonnetb/sqoop SQOOP-2821

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/sqoop/pull/12.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #12


commit c0e82d67251ac3fc79e9b75215374cebd5d3eed2
Author: Benjamin BONNET 
Date:   2016-02-10T11:14:49Z

SQOOP-2821
Direct export to Netezza : user/owner confusion




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (SQOOP-2821) Direct export to Netezza : user/owner confusion

2016-02-10 Thread Benjamin BONNET (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin BONNET updated SQOOP-2821:
---
Attachment: SQOOP-2821.patch

> Direct export to Netezza : user/owner confusion
> ---
>
> Key: SQOOP-2821
> URL: https://issues.apache.org/jira/browse/SQOOP-2821
> Project: Sqoop
>  Issue Type: Bug
>  Components: connectors
>Affects Versions: 1.4.6
>Reporter: Benjamin BONNET
> Attachments: SQOOP-2821.patch
>
>
> Hi,
> when exporting to Netezza, if connected user (in the Netezza URL) is not the 
> target table owner, things go wrong :
> - if you do not use qualified table name, the table existence check will fail 
> since SQOOP will assume table owner is the same as connected user
> - if you do use qualified table name, the table existence check will succeed 
> but table export will fail since SQOOP will try to export to a twice 
> qualified table (db.owner.owner.table instead of db.owner.table)
> Regards





[jira] [Commented] (SQOOP-2821) Direct export to Netezza : user/owner confusion

2016-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140644#comment-15140644
 ] 

ASF GitHub Bot commented on SQOOP-2821:
---

GitHub user bonnetb opened a pull request:

https://github.com/apache/sqoop/pull/12

SQOOP-2821

Direct export to Netezza : user/owner confusion

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bonnetb/sqoop SQOOP-2821

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/sqoop/pull/12.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #12


commit c0e82d67251ac3fc79e9b75215374cebd5d3eed2
Author: Benjamin BONNET 
Date:   2016-02-10T11:14:49Z

SQOOP-2821
Direct export to Netezza : user/owner confusion




> Direct export to Netezza : user/owner confusion
> ---
>
> Key: SQOOP-2821
> URL: https://issues.apache.org/jira/browse/SQOOP-2821
> Project: Sqoop
>  Issue Type: Bug
>  Components: connectors
>Affects Versions: 1.4.6
>Reporter: Benjamin BONNET
>
> Hi,
> when exporting to Netezza, if connected user (in the Netezza URL) is not the 
> target table owner, things go wrong :
> - if you do not use qualified table name, the table existence check will fail 
> since SQOOP will assume table owner is the same as connected user
> - if you do use qualified table name, the table existence check will succeed 
> but table export will fail since SQOOP will try to export to a twice 
> qualified table (db.owner.owner.table instead of db.owner.table)
> Regards





[jira] [Commented] (SQOOP-2821) Direct export to Netezza : user/owner confusion

2016-02-10 Thread Benjamin BONNET (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140667#comment-15140667
 ] 

Benjamin BONNET commented on SQOOP-2821:


Hi,
This is how it works:
1) The direct Netezza manager checks whether the table exists, using the table 
name as-is if it is qualified (owner.table), or the table name plus the 
connected user name (assuming user = owner) if the table name is not qualified.
2) SqlManager uses the table name to get column names. But given a qualified 
name, SqlManager will fail to get the column names, since it will request a 
twice-qualified table (owner.owner.table).
The patch simply unqualifies the table name so that SqlManager can use it.
Regards
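
The unqualification in step 2 amounts to stripping the owner prefix before the 
name reaches SqlManager; a minimal sketch under that assumption (the helper 
name is invented, not the patch's actual code):

```java
public class TableNameUtil {
    // Strip any owner/schema qualifier (e.g. "OWNER.TABLE" -> "TABLE") so
    // that SqlManager does not end up qualifying the name a second time.
    static String unqualify(String tableName) {
        int dot = tableName.lastIndexOf('.');
        return dot < 0 ? tableName : tableName.substring(dot + 1);
    }

    public static void main(String[] args) {
        System.out.println(unqualify("OWNER.BOOKINFO")); // prints BOOKINFO
        System.out.println(unqualify("BOOKINFO"));       // already unqualified
    }
}
```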

> Direct export to Netezza : user/owner confusion
> ---
>
> Key: SQOOP-2821
> URL: https://issues.apache.org/jira/browse/SQOOP-2821
> Project: Sqoop
>  Issue Type: Bug
>  Components: connectors
>Affects Versions: 1.4.6
>Reporter: Benjamin BONNET
> Attachments: SQOOP-2821.patch
>
>
> Hi,
> when exporting to Netezza, if connected user (in the Netezza URL) is not the 
> target table owner, things go wrong :
> - if you do not use qualified table name, the table existence check will fail 
> since SQOOP will assume table owner is the same as connected user
> - if you do use qualified table name, the table existence check will succeed 
> but table export will fail since SQOOP will try to export to a twice 
> qualified table (db.owner.owner.table instead of db.owner.table)
> Regards





[jira] [Commented] (SQOOP-816) Scoop and support for external Hive tables

2016-02-10 Thread Virendhar Sivaraman (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140505#comment-15140505
 ] 

Virendhar Sivaraman commented on SQOOP-816:
---

+1, yes, this feature will be useful.

> Scoop and support for external Hive tables
> --
>
> Key: SQOOP-816
> URL: https://issues.apache.org/jira/browse/SQOOP-816
> Project: Sqoop
>  Issue Type: Improvement
>  Components: hive-integration
>Reporter: Santosh Achhra
>Priority: Minor
>  Labels: External, Hive, Scoop, Tables, newbie
>
> Sqoop does not support Hive EXTERNAL tables at the moment. Any import using 
> Sqoop creates a managed table, but in real-world scenarios it is very important 
> to have EXTERNAL tables. As of now we have to execute an ALTER statement to 
> change the table properties to make the table an external table, which is not a 
> big deal, but it would be nice to have an option in Sqoop to specify the type 
> of table required.
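
The ALTER workaround mentioned above can be scripted until such an option 
exists; a sketch that only builds the statement (the table name and any JDBC 
execution are assumptions, not taken from the report):

```java
public class ExternalTableWorkaround {
    // Build the ALTER statement that flips a managed Hive table to external.
    static String alterToExternal(String table) {
        return "ALTER TABLE " + table + " SET TBLPROPERTIES('EXTERNAL'='TRUE')";
    }

    public static void main(String[] args) {
        // In practice this would be executed through a Hive JDBC Statement.
        System.out.println(alterToExternal("mytable"));
    }
}
```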





[jira] [Updated] (SQOOP-2821) Direct export to Netezza : user/owner confusion

2016-02-10 Thread Benjamin BONNET (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin BONNET updated SQOOP-2821:
---
Attachment: (was: SQOOP-2821.patch)

> Direct export to Netezza : user/owner confusion
> ---
>
> Key: SQOOP-2821
> URL: https://issues.apache.org/jira/browse/SQOOP-2821
> Project: Sqoop
>  Issue Type: Bug
>  Components: connectors
>Affects Versions: 1.4.6
>Reporter: Benjamin BONNET
> Attachments: 0001-SQOOP-2821.patch
>
>
> Hi,
> when exporting to Netezza, if connected user (in the Netezza URL) is not the 
> target table owner, things go wrong :
> - if you do not use qualified table name, the table existence check will fail 
> since SQOOP will assume table owner is the same as connected user
> - if you do use qualified table name, the table existence check will succeed 
> but table export will fail since SQOOP will try to export to a twice 
> qualified table (db.owner.owner.table instead of db.owner.table)
> Regards





[jira] [Updated] (SQOOP-2192) SQOOP IMPORT/EXPORT for the ORC file HIVE TABLE Failing

2016-02-10 Thread Sunil Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Kumar updated SQOOP-2192:
---
Summary: SQOOP IMPORT/EXPORT for the ORC file HIVE TABLE Failing  (was: 
SQOOP EXPORT for the ORC file HIVE TABLE Failing)

> SQOOP IMPORT/EXPORT for the ORC file HIVE TABLE Failing
> ---
>
> Key: SQOOP-2192
> URL: https://issues.apache.org/jira/browse/SQOOP-2192
> Project: Sqoop
>  Issue Type: Bug
>  Components: hive-integration
>Affects Versions: 1.4.5
> Environment: Hadoop 2.6.0
> Hive 1.0.0
> Sqoop 1.4.5
>Reporter: Sunil Kumar
>Assignee: Venkat Ranganathan
>
> We are trying to export an RDBMS table to a Hive table in order to run Hive 
> delete and update queries on the exported Hive table. For Hive to support 
> delete and update queries, the following is required:
> 1. The table needs to be declared as having the transactional property
> 2. The table must be in ORC format
> 3. The table must be bucketed
> To do that I have created the Hive table using HCat:
> create table bookinfo(md5 STRING , isbn STRING , bookid STRING , booktitle 
> STRING , author STRING , yearofpub STRING , publisher STRING , imageurls 
> STRING , imageurlm STRING , imageurll STRING , price DOUBLE , totalrating 
> DOUBLE , totalusers BIGINT , maxrating INT , minrating INT , avgrating DOUBLE 
> , rawscore DOUBLE , norm_score DOUBLE) clustered by (md5) into 10 buckets 
> stored as orc TBLPROPERTIES('transactional'='true');
> then running sqoop import:
> sqoop import --verbose --connect 'RDBMS_JDBC_URL' --driver JDBC_DRIVER 
> --table bookinfo --null-string '\\N' --null-non-string '\\N' --username USER 
> --password PASSWPRD --hcatalog-database hive_test_trans --hcatalog-table 
> bookinfo --hcatalog-storage-stanza "storedas orc" -m 1
> The following exception is coming:
> 15/03/09 16:28:59 ERROR tool.ImportTool: Encountered IOException running 
> import job: org.apache.hive.hcatalog.common.HCatException : 2016 : Error 
> operation not supported : Store into a partition with bucket definition from 
> Pig/Mapreduce is not supported
> at 
> org.apache.hive.hcatalog.mapreduce.HCatOutputFormat.setOutput(HCatOutputFormat.java:109)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatOutputFormat.setOutput(HCatOutputFormat.java:70)
> at 
> org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.configureHCat(SqoopHCatUtilities.java:339)
> at 
> org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.configureImportOutputFormat(SqoopHCatUtilities.java:753)
> at 
> org.apache.sqoop.mapreduce.ImportJobBase.configureOutputFormat(ImportJobBase.java:98)
> at 
> org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:240)
> at 
> org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:665)
> at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
> at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:601)
> at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
> at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
> Please let me know if any further details are required.





[jira] [Assigned] (SQOOP-2821) Direct export to Netezza : user/owner confusion

2016-02-10 Thread Benjamin BONNET (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin BONNET reassigned SQOOP-2821:
--

Assignee: Benjamin BONNET

> Direct export to Netezza : user/owner confusion
> ---
>
> Key: SQOOP-2821
> URL: https://issues.apache.org/jira/browse/SQOOP-2821
> Project: Sqoop
>  Issue Type: Bug
>  Components: connectors
>Affects Versions: 1.4.6
>Reporter: Benjamin BONNET
>Assignee: Benjamin BONNET
> Attachments: 0001-SQOOP-2821.patch
>
>
> Hi,
> when exporting to Netezza, if connected user (in the Netezza URL) is not the 
> target table owner, things go wrong :
> - if you do not use qualified table name, the table existence check will fail 
> since SQOOP will assume table owner is the same as connected user
> - if you do use qualified table name, the table existence check will succeed 
> but table export will fail since SQOOP will try to export to a twice 
> qualified table (db.owner.owner.table instead of db.owner.table)
> Regards





[jira] [Commented] (SQOOP-2192) SQOOP IMPORT/EXPORT for the ORC file HIVE TABLE Failing

2016-02-10 Thread Sunil Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140693#comment-15140693
 ] 

Sunil Kumar commented on SQOOP-2192:


Virendhar, Sqoop has the issue in both import and export. Sorry, the use case 
mentioned was an import.

> SQOOP IMPORT/EXPORT for the ORC file HIVE TABLE Failing
> ---
>
> Key: SQOOP-2192
> URL: https://issues.apache.org/jira/browse/SQOOP-2192
> Project: Sqoop
>  Issue Type: Bug
>  Components: hive-integration
>Affects Versions: 1.4.5
> Environment: Hadoop 2.6.0
> Hive 1.0.0
> Sqoop 1.4.5
>Reporter: Sunil Kumar
>Assignee: Venkat Ranganathan
>
> We are trying to export an RDBMS table to a Hive table in order to run Hive 
> delete and update queries on the exported Hive table. For Hive to support 
> delete and update queries, the following is required:
> 1. The table needs to be declared as having the transactional property
> 2. The table must be in ORC format
> 3. The table must be bucketed
> To do that I have created the Hive table using HCat:
> create table bookinfo(md5 STRING , isbn STRING , bookid STRING , booktitle 
> STRING , author STRING , yearofpub STRING , publisher STRING , imageurls 
> STRING , imageurlm STRING , imageurll STRING , price DOUBLE , totalrating 
> DOUBLE , totalusers BIGINT , maxrating INT , minrating INT , avgrating DOUBLE 
> , rawscore DOUBLE , norm_score DOUBLE) clustered by (md5) into 10 buckets 
> stored as orc TBLPROPERTIES('transactional'='true');
> then running sqoop import:
> sqoop import --verbose --connect 'RDBMS_JDBC_URL' --driver JDBC_DRIVER 
> --table bookinfo --null-string '\\N' --null-non-string '\\N' --username USER 
> --password PASSWPRD --hcatalog-database hive_test_trans --hcatalog-table 
> bookinfo --hcatalog-storage-stanza "storedas orc" -m 1
> The following exception is coming:
> 15/03/09 16:28:59 ERROR tool.ImportTool: Encountered IOException running 
> import job: org.apache.hive.hcatalog.common.HCatException : 2016 : Error 
> operation not supported : Store into a partition with bucket definition from 
> Pig/Mapreduce is not supported
> at 
> org.apache.hive.hcatalog.mapreduce.HCatOutputFormat.setOutput(HCatOutputFormat.java:109)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatOutputFormat.setOutput(HCatOutputFormat.java:70)
> at 
> org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.configureHCat(SqoopHCatUtilities.java:339)
> at 
> org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.configureImportOutputFormat(SqoopHCatUtilities.java:753)
> at 
> org.apache.sqoop.mapreduce.ImportJobBase.configureOutputFormat(ImportJobBase.java:98)
> at 
> org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:240)
> at 
> org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:665)
> at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
> at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:601)
> at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
> at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
> Please let me know if any further details are required.
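The error above comes from HCatOutputFormat: writing into a bucketed table from MapReduce is not supported, so Sqoop cannot load the transactional table directly. A common workaround (a sketch only; the staging table name is made up and the column list is abbreviated) is to import into a plain staging table and let Hive do the final, bucket-aware insert:

```sql
-- 1. Non-bucketed, non-transactional staging table that HCatalog can write to.
CREATE TABLE bookinfo_staging (
  md5 STRING, isbn STRING, bookid STRING, booktitle STRING
  -- ... remaining columns as in bookinfo ...
);

-- 2. Point the Sqoop import at bookinfo_staging instead of bookinfo.

-- 3. Hive itself handles the bucketing and the ORC/ACID write path.
INSERT INTO TABLE bookinfo SELECT * FROM bookinfo_staging;
```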



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] sqoop pull request: [SQOOP-2607] Add a table encoding parameter fo...

2016-02-10 Thread bonnetb
Github user bonnetb closed the pull request at:

https://github.com/apache/sqoop/pull/9


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (SQOOP-2607) Direct import from Netezza and encoding

2016-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140745#comment-15140745
 ] 

ASF GitHub Bot commented on SQOOP-2607:
---

Github user bonnetb closed the pull request at:

https://github.com/apache/sqoop/pull/9


> Direct import from Netezza and encoding
> ---
>
> Key: SQOOP-2607
> URL: https://issues.apache.org/jira/browse/SQOOP-2607
> Project: Sqoop
>  Issue Type: Bug
>  Components: connectors
>Affects Versions: 1.4.6
>Reporter: Benjamin BONNET
>Assignee: Benjamin BONNET
> Fix For: 1.4.7
>
> Attachments: 
> 0001-Add-a-table-encoding-parameter-for-Netezza-direct-im.patch
>
>
> Hi,
> I encountered an encoding issue while importing a Netezza table containing 
> ISO-8859-15 encoded VARCHAR columns. Using direct mode, non-ASCII characters are 
> corrupted. That does not occur in non-direct mode.
> Actually, direct mode uses a Netezza "external table", i.e. it flushes the 
> table into a stream using the "internal" encoding (in my case, ISO-8859-15),
> but the Sqoop import mapper reads this stream as a UTF-8 one.
> That problem does not occur in non-direct mode since it uses the Netezza JDBC 
> driver to map fields directly to Java types (no stream encoding involved).
> To fix that issue in my environment, I modified the Sqoop Netezza connector 
> and added a parameter to specify the Netezza VARCHAR encoding. The default 
> value will be UTF-8, of course. I will make a pull request on GitHub to 
> propose that enhancement.
> Regards
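The mismatch described above (bytes written in a Latin code page, read back as UTF-8) can be reproduced with plain JDK charsets; this small, hypothetical sketch is independent of Netezza and Sqoop:

```java
import java.nio.charset.StandardCharsets;

public class EncodingMismatchDemo {
    public static void main(String[] args) {
        // "café" as the database would stream it: 0xE9 is 'é' in both
        // ISO-8859-1 and ISO-8859-15 (they differ only in a few code points).
        byte[] raw = {0x63, 0x61, 0x66, (byte) 0xE9};

        // Decoding with the encoding the bytes were actually written in:
        String correct = new String(raw, StandardCharsets.ISO_8859_1);
        System.out.println(correct); // café

        // Decoding the same bytes as UTF-8 (what the import mapper assumes):
        // 0xE9 is not a valid stand-alone UTF-8 byte, so the character is
        // replaced with U+FFFD and the value is corrupted.
        String corrupted = new String(raw, StandardCharsets.UTF_8);
        System.out.println(corrupted); // caf� (U+FFFD replacement character)
    }
}
```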





[jira] [Created] (SQOOP-2833) Sqoop2: Integration Tests: Allow setting which "time type" should be used based on the DatabaseProvider

2016-02-10 Thread Abraham Fine (JIRA)
Abraham Fine created SQOOP-2833:
---

 Summary: Sqoop2: Integration Tests: Allow setting which "time 
type" should be used based on the DatabaseProvider
 Key: SQOOP-2833
 URL: https://issues.apache.org/jira/browse/SQOOP-2833
 Project: Sqoop
  Issue Type: Bug
Affects Versions: 1.99.6
Reporter: Abraham Fine
Assignee: Abraham Fine


The different databases we are looking to support behave differently with 
respect to "time" data types. We should be able to dynamically choose the right 
type for the test.





[jira] [Updated] (SQOOP-2833) Sqoop2: Integration Tests: Allow setting which "time type" should be used based on the DatabaseProvider

2016-02-10 Thread Abraham Fine (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abraham Fine updated SQOOP-2833:

Description: The different databases we are looking to support behave 
differently with respect to "time" data types. We should be able to dynamically 
choose the right type for the test.  (was: The differrent databases we are 
looking to support behave differently with respect to "time" data types. We 
should be able to dynamically choose the right type for the test.)

> Sqoop2: Integration Tests: Allow setting which "time type" should be used 
> based on the DatabaseProvider
> ---
>
> Key: SQOOP-2833
> URL: https://issues.apache.org/jira/browse/SQOOP-2833
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.99.6
>Reporter: Abraham Fine
>Assignee: Abraham Fine
>
> The different databases we are looking to support behave differently with 
> respect to "time" data types. We should be able to dynamically choose the 
> right type for the test.





[jira] [Commented] (SQOOP-2829) Sqoop2: LinkRestTest should pass when run against a real cluster

2016-02-10 Thread Sqoop QA bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142173#comment-15142173
 ] 

Sqoop QA bot commented on SQOOP-2829:
-

Testing file 
[SQOOP-2829.patch|https://issues.apache.org/jira/secure/attachment/12787306/SQOOP-2829.patch]
 against branch sqoop2 took 1:41:27.382060.

{color:red}Overall:{color} -1 due to an error(s), see details below:

{color:green}SUCCESS:{color} Clean was successful
{color:green}SUCCESS:{color} Patch applied correctly
{color:green}SUCCESS:{color} Patch add/modify test case
{color:green}SUCCESS:{color} License check passed
{color:green}SUCCESS:{color} Patch compiled
{color:green}SUCCESS:{color} All unit tests passed (executed 1700 tests)
{color:green}SUCCESS:{color} Test coverage did not decreased 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2190/artifact/patch-process/cobertura_report.txt])
{color:green}SUCCESS:{color} No new findbugs warnings 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2190/artifact/patch-process/findbugs_report.txt])
{color:red}ERROR:{color} Some of integration tests failed 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2190/artifact/patch-process/test_integration.txt],
 executed 0 tests)

Console output is available 
[here|https://builds.apache.org/job/PreCommit-SQOOP-Build/2190/console].

This message is automatically generated.

> Sqoop2: LinkRestTest should pass when run against a real cluster
> 
>
> Key: SQOOP-2829
> URL: https://issues.apache.org/jira/browse/SQOOP-2829
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.99.6
>Reporter: Abraham Fine
>Assignee: Abraham Fine
> Attachments: SQOOP-2829.patch
>
>
> Currently the LinkRestTest creates a link from the generic-jdbc-connector. 
> This link must specify a JDBC driver class, and that class may not be present 
> on the real cluster. We should use a different connector.





[jira] [Updated] (SQOOP-2833) Sqoop2: Integration Tests: Allow setting which "time type" should be used based on the DatabaseProvider

2016-02-10 Thread Abraham Fine (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abraham Fine updated SQOOP-2833:

Attachment: SQOOP-2833.patch

> Sqoop2: Integration Tests: Allow setting which "time type" should be used 
> based on the DatabaseProvider
> ---
>
> Key: SQOOP-2833
> URL: https://issues.apache.org/jira/browse/SQOOP-2833
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.99.6
>Reporter: Abraham Fine
>Assignee: Abraham Fine
> Attachments: SQOOP-2833.patch
>
>
> The different databases we are looking to support behave differently with 
> respect to "time" data types. We should be able to dynamically choose the 
> right type for the test.





[jira] [Commented] (SQOOP-2832) Sqoop2: Precommit: Create log files for individual tests

2016-02-10 Thread Sqoop QA bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142181#comment-15142181
 ] 

Sqoop QA bot commented on SQOOP-2832:
-

Testing file 
[SQOOP-2832.patch|https://issues.apache.org/jira/secure/attachment/12787399/SQOOP-2832.patch]
 against branch sqoop2 took 1:35:13.565113.

{color:red}Overall:{color} -1 due to an error(s), see details below:

{color:green}SUCCESS:{color} Clean was successful
{color:green}SUCCESS:{color} Patch applied correctly
{color:green}SUCCESS:{color} Patch add/modify test case
{color:green}SUCCESS:{color} License check passed
{color:green}SUCCESS:{color} Patch compiled
{color:green}SUCCESS:{color} All unit tests passed (executed 1700 tests)
{color:green}SUCCESS:{color} Test coverage did not decreased 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2191/artifact/patch-process/cobertura_report.txt])
{color:orange}WARNING:{color} New findbugs warnings 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2191/artifact/patch-process/findbugs_report.txt])
* Package {{test}}: Class 
{{org.apache.sqoop.test.testng.ReconfigureLogListener}} introduced 1 completely 
new findbugs warnings.


{color:red}ERROR:{color} Some of integration tests failed 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2191/artifact/patch-process/test_integration.txt],
 executed 0 tests)

Console output is available 
[here|https://builds.apache.org/job/PreCommit-SQOOP-Build/2191/console].

This message is automatically generated.

> Sqoop2: Precommit: Create log files for individual tests
> 
>
> Key: SQOOP-2832
> URL: https://issues.apache.org/jira/browse/SQOOP-2832
> Project: Sqoop
>  Issue Type: Bug
>Reporter: Jarek Jarcec Cecho
>Assignee: Jarek Jarcec Cecho
> Fix For: 1.99.7
>
> Attachments: SQOOP-2832.patch
>
>
> I feel that debugging anything in our pre-commit hook is currently mission 
> impossible for several reasons. One of the main ones is that we're 
> generating one single log file that has [grown to 1GB in 
> size|https://builds.apache.org/job/PreCommit-SQOOP-Build/2185/artifact/test/target/surefire-reports/]
>  with all "test related" logs and I'm not even remotely able to find anything 
> there.
> In the normal case we would have one log per test class, but as we've refactored 
> our integration tests to run multiple classes inside a single Suite, we no 
> longer have that for free. We made this change to cut the time it takes 
> to run the integration tests - before, we were initializing mini clusters for 
> each class, which adds ~40 seconds per class, so I don't think that 
> reverting it would be reasonable.
> We should perhaps explore other ways to get multiple log files.





[jira] [Commented] (SQOOP-2831) Sqoop2: Precommit: Do not run multiple execution for integration tests

2016-02-10 Thread Jarek Jarcec Cecho (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142186#comment-15142186
 ] 

Jarek Jarcec Cecho commented on SQOOP-2831:
---

Failing unit tests means that it doesn't even make sense to look into 
integration tests as we did not finish building everything properly.


> Sqoop2: Precommit: Do not run multiple execution for integration tests
> --
>
> Key: SQOOP-2831
> URL: https://issues.apache.org/jira/browse/SQOOP-2831
> Project: Sqoop
>  Issue Type: Bug
>Reporter: Jarek Jarcec Cecho
>Assignee: Jarek Jarcec Cecho
> Fix For: 1.99.7
>
> Attachments: SQOOP-2831.patch
>
>
> We have several [different "executions" defined in our integration 
> tests|https://github.com/apache/sqoop/blob/sqoop2/test/pom.xml#L214]. It 
> seems that our purpose was to run one suite per execution, but the side 
> effect is that it's hard to run a single test, as my usual command line:
> {code}
> mvn clean integration-test -pl test -Dtest=$TEST
> {code}
> will execute the test {{$TEST}} once for each execution defined.
> It seems that the maven surefire plugin supports [multiple suites per single 
> execution|http://maven.apache.org/surefire/maven-surefire-plugin/examples/testng.html],
>  so I'm wondering if that will work for us as well.





[jira] [Updated] (SQOOP-2831) Sqoop2: Precommit: Do not run multiple execution for integration tests

2016-02-10 Thread Jarek Jarcec Cecho (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jarek Jarcec Cecho updated SQOOP-2831:
--
Attachment: SQOOP-2831.patch

> Sqoop2: Precommit: Do not run multiple execution for integration tests
> --
>
> Key: SQOOP-2831
> URL: https://issues.apache.org/jira/browse/SQOOP-2831
> Project: Sqoop
>  Issue Type: Bug
>Reporter: Jarek Jarcec Cecho
>Assignee: Jarek Jarcec Cecho
> Fix For: 1.99.7
>
> Attachments: SQOOP-2831.patch
>
>
> We have several [different "executions" defined in our integration 
> tests|https://github.com/apache/sqoop/blob/sqoop2/test/pom.xml#L214]. It 
> seems that our purpose was to run one suite per execution, but the side 
> effect is that it's hard to run a single test, as my usual command line:
> {code}
> mvn clean integration-test -pl test -Dtest=$TEST
> {code}
> will execute the test {{$TEST}} once for each execution defined.
> It seems that the maven surefire plugin supports [multiple suites per single 
> execution|http://maven.apache.org/surefire/maven-surefire-plugin/examples/testng.html],
>  so I'm wondering if that will work for us as well.





[jira] [Commented] (SQOOP-2829) Sqoop2: LinkRestTest should pass when run against a real cluster

2016-02-10 Thread Jarek Jarcec Cecho (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141900#comment-15141900
 ] 

Jarek Jarcec Cecho commented on SQOOP-2829:
---

+1 provided precommit hook will be happy

> Sqoop2: LinkRestTest should pass when run against a real cluster
> 
>
> Key: SQOOP-2829
> URL: https://issues.apache.org/jira/browse/SQOOP-2829
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.99.6
>Reporter: Abraham Fine
>Assignee: Abraham Fine
> Attachments: SQOOP-2829.patch
>
>
> Currently the LinkRestTest creates a link from the generic-jdbc-connector. 
> This link must specify a JDBC driver class, and that class may not be present 
> on the real cluster. We should use a different connector.





[jira] [Created] (SQOOP-2831) Sqoop2: Precommit: Do not run multiple execution for integration tests

2016-02-10 Thread Jarek Jarcec Cecho (JIRA)
Jarek Jarcec Cecho created SQOOP-2831:
-

 Summary: Sqoop2: Precommit: Do not run multiple execution for 
integration tests
 Key: SQOOP-2831
 URL: https://issues.apache.org/jira/browse/SQOOP-2831
 Project: Sqoop
  Issue Type: Bug
Reporter: Jarek Jarcec Cecho
Assignee: Jarek Jarcec Cecho
 Fix For: 1.99.7


We have several [different "executions" defined in our integration 
tests|https://github.com/apache/sqoop/blob/sqoop2/test/pom.xml#L214]. It seems 
that our purpose was to run one suite per execution, but the side effect is 
that it's hard to run a single test, as my usual command line:

{code}
mvn clean integration-test -pl test -Dtest=$TEST
{code}

will execute the test {{$TEST}} once for each execution defined.

It seems that the maven surefire plugin supports [multiple suites per single 
execution|http://maven.apache.org/surefire/maven-surefire-plugin/examples/testng.html],
 so I'm wondering if that will work for us as well.
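If a single surefire execution can indeed drive multiple TestNG suites, the pom.xml configuration might look roughly like this (a sketch only; integration-tests-suite.xml exists in the tree, the second suite file name is invented for illustration):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <suiteXmlFiles>
      <!-- All suites in ONE execution, so -Dtest=$TEST runs a test once. -->
      <suiteXmlFile>src/test/resources/integration-tests-suite.xml</suiteXmlFile>
      <suiteXmlFile>src/test/resources/other-tests-suite.xml</suiteXmlFile>
    </suiteXmlFiles>
  </configuration>
</plugin>
```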





[jira] [Commented] (SQOOP-2831) Sqoop2: Precommit: Do not run multiple execution for integration tests

2016-02-10 Thread Jarek Jarcec Cecho (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142000#comment-15142000
 ] 

Jarek Jarcec Cecho commented on SQOOP-2831:
---

I'm expecting possible failures in the way we're initializing the various suites, 
so let's see what happens :)

> Sqoop2: Precommit: Do not run multiple execution for integration tests
> --
>
> Key: SQOOP-2831
> URL: https://issues.apache.org/jira/browse/SQOOP-2831
> Project: Sqoop
>  Issue Type: Bug
>Reporter: Jarek Jarcec Cecho
>Assignee: Jarek Jarcec Cecho
> Fix For: 1.99.7
>
> Attachments: SQOOP-2831.patch
>
>
> We have several [different "executions" defined in our integration 
> tests|https://github.com/apache/sqoop/blob/sqoop2/test/pom.xml#L214]. It 
> seems that our purpose was to run one suite per execution, but the side 
> effect is that it's hard to run a single test, as my usual command line:
> {code}
> mvn clean integration-test -pl test -Dtest=$TEST
> {code}
> will execute the test {{$TEST}} once for each execution defined.
> It seems that the maven surefire plugin supports [multiple suites per single 
> execution|http://maven.apache.org/surefire/maven-surefire-plugin/examples/testng.html],
>  so I'm wondering if that will work for us as well.





[jira] [Commented] (SQOOP-2829) Sqoop2: LinkRestTest should pass when run against a real cluster

2016-02-10 Thread Sqoop QA bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142065#comment-15142065
 ] 

Sqoop QA bot commented on SQOOP-2829:
-

Testing file 
[SQOOP-2829.patch|https://issues.apache.org/jira/secure/attachment/12787306/SQOOP-2829.patch]
 against branch sqoop2 took 1:09:35.215759.

{color:red}Overall:{color} -1 due to an error(s), see details below:

{color:green}SUCCESS:{color} Clean was successful
{color:green}SUCCESS:{color} Patch applied correctly
{color:green}SUCCESS:{color} Patch add/modify test case
{color:green}SUCCESS:{color} License check passed
{color:green}SUCCESS:{color} Patch compiled
{color:green}SUCCESS:{color} All unit tests passed (executed 1700 tests)
{color:green}SUCCESS:{color} Test coverage did not decreased 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2188/artifact/patch-process/cobertura_report.txt])
{color:green}SUCCESS:{color} No new findbugs warnings 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2188/artifact/patch-process/findbugs_report.txt])
{color:red}ERROR:{color} Some of integration tests failed 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2188/artifact/patch-process/test_integration.txt],
 executed 80 tests)
* Test {{integration-tests}}
* Test {{org.apache.sqoop.integration.connector.hdfs.ParquetTest}}
* Test {{org.apache.sqoop.integration.connector.hdfs.S3Test}}



Console output is available 
[here|https://builds.apache.org/job/PreCommit-SQOOP-Build/2188/console].

This message is automatically generated.

> Sqoop2: LinkRestTest should pass when run against a real cluster
> 
>
> Key: SQOOP-2829
> URL: https://issues.apache.org/jira/browse/SQOOP-2829
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.99.6
>Reporter: Abraham Fine
>Assignee: Abraham Fine
> Attachments: SQOOP-2829.patch
>
>
> Currently the LinkRestTest creates a link from the generic-jdbc-connector. 
> This link must specify a JDBC driver class, and that class may not be present 
> on the real cluster. We should use a different connector.





Review Request 43467: SQOOP-2832: Sqoop2: Precommit: Create log files for individual tests

2016-02-10 Thread Jarek Cecho

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/43467/
---

Review request for Sqoop and Jarek Cecho.


Bugs: SQOOP-2832
https://issues.apache.org/jira/browse/SQOOP-2832


Repository: sqoop-sqoop2


Description
---

I feel that debugging anything in our pre-commit hook is currently mission 
impossible for several reasons. One of the main ones is that we're generating 
one single log file that has [grown to 1GB in 
size|https://builds.apache.org/job/PreCommit-SQOOP-Build/2185/artifact/test/target/surefire-reports/]
 with all "test related" logs and I'm not even remotely able to find anything 
there.

In the normal case we would have one log per test class, but as we've refactored 
our integration tests to run multiple classes inside a single Suite, we no 
longer have that for free. We made this change to cut the time it takes to 
run the integration tests - before, we were initializing mini clusters for each 
class, which adds ~40 seconds per class, so I don't think that reverting it 
would be reasonable.

We should perhaps explore other ways to get multiple log files.


Diffs
-

  test/pom.xml 134bca1 
  test/src/main/java/org/apache/sqoop/test/testng/ReconfigureLogListener.java 
PRE-CREATION 
  test/src/test/resources/integration-tests-suite.xml 73e0a77 

Diff: https://reviews.apache.org/r/43467/diff/


Testing
---


Thanks,

Jarek Cecho



[jira] [Commented] (SQOOP-2832) Sqoop2: Precommit: Create log files for individual tests

2016-02-10 Thread Jarek Jarcec Cecho (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142100#comment-15142100
 ] 

Jarek Jarcec Cecho commented on SQOOP-2832:
---

Here is my "poor man's solution" stab at the problem. It's well described in 
the patch, but for convenience:

{code}
 * Sqoop runs as many tests as possible inside one suite to save the time spent
 * starting miniclusters, which is a time-consuming exercise (~40 seconds per
 * single test class).
 * That however means that we have one output log file, which recently grew to
 * more than 1GB in size, and hence the usability has decreased.
 *
 * This listener will intercept each test and reconfigure log4j to log directly
 * into files rather than to the console (which would be forwarded by the maven
 * surefire plugin to the normal log file). Each test will get its own file,
 * which is easier for a human to read.
 *
 * We're using a counter to order log files by execution order rather than by
 * name, as we can't guarantee log isolation entirely (e.g. some information
 * relevant to test N can be in the log file for test N-1). It's easier to open
 * the previous log if you immediately know which file it is.
{code}

I'm open to any other proposals - I just didn't find a different solution other 
than running one suite per test class, which seems undesirable. I've put the new 
listener only in one suite for the time being, to see how it will work on our 
jenkins infrastructure first.
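Since the listener is wired in per suite, enabling it elsewhere should only require registering it in the corresponding TestNG suite file, roughly like this (a sketch; the listener class name comes from the patch, the suite name attribute is illustrative):

```xml
<suite name="integration-tests">
  <listeners>
    <!-- Redirects log4j output into one file per test class. -->
    <listener class-name="org.apache.sqoop.test.testng.ReconfigureLogListener"/>
  </listeners>
  <!-- ... test definitions ... -->
</suite>
```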

> Sqoop2: Precommit: Create log files for individual tests
> 
>
> Key: SQOOP-2832
> URL: https://issues.apache.org/jira/browse/SQOOP-2832
> Project: Sqoop
>  Issue Type: Bug
>Reporter: Jarek Jarcec Cecho
>Assignee: Jarek Jarcec Cecho
> Fix For: 1.99.7
>
> Attachments: SQOOP-2832.patch
>
>
> I feel that debugging anything in our pre-commit hook is currently mission 
> impossible for several reasons. One of the main ones is that we're 
> generating one single log file that has [grown to 1GB in 
> size|https://builds.apache.org/job/PreCommit-SQOOP-Build/2185/artifact/test/target/surefire-reports/]
>  with all "test related" logs and I'm not even remotely able to find anything 
> there.
> In the normal case we would have one log per test class, but as we've refactored 
> our integration tests to run multiple classes inside a single Suite, we no 
> longer have that for free. We made this change to cut the time it takes 
> to run the integration tests - before, we were initializing mini clusters for 
> each class, which adds ~40 seconds per class, so I don't think that 
> reverting it would be reasonable.
> We should perhaps explore other ways to get multiple log files.





[jira] [Commented] (SQOOP-2831) Sqoop2: Precommit: Do not run multiple execution for integration tests

2016-02-10 Thread Sqoop QA bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142141#comment-15142141
 ] 

Sqoop QA bot commented on SQOOP-2831:
-

Testing file 
[SQOOP-2831.patch|https://issues.apache.org/jira/secure/attachment/12787384/SQOOP-2831.patch]
 against branch sqoop2 took 1:33:11.623906.

{color:red}Overall:{color} -1 due to an error(s), see details below:

{color:green}SUCCESS:{color} Clean was successful
{color:green}SUCCESS:{color} Patch applied correctly
{color:green}SUCCESS:{color} Patch add/modify test case
{color:green}SUCCESS:{color} License check passed
{color:green}SUCCESS:{color} Patch compiled
{color:red}ERROR:{color} Some of unit tests failed 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2189/artifact/patch-process/test_unit.txt],
 executed 1481 tests)
* Test {{org.apache.sqoop.connector.kafka.TestKafkaLoader}}


{color:green}SUCCESS:{color} Test coverage did not decreased 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2189/artifact/patch-process/cobertura_report.txt])
{color:green}SUCCESS:{color} No new findbugs warnings 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2189/artifact/patch-process/findbugs_report.txt])
{color:red}ERROR:{color} Some of integration tests failed 
([report|https://builds.apache.org/job/PreCommit-SQOOP-Build/2189/artifact/patch-process/test_integration.txt],
 executed 0 tests)
* Test {{org.apache.sqoop.connector.kafka.TestKafkaLoader}}



Console output is available 
[here|https://builds.apache.org/job/PreCommit-SQOOP-Build/2189/console].

This message is automatically generated.

> Sqoop2: Precommit: Do not run multiple execution for integration tests
> --
>
> Key: SQOOP-2831
> URL: https://issues.apache.org/jira/browse/SQOOP-2831
> Project: Sqoop
>  Issue Type: Bug
>Reporter: Jarek Jarcec Cecho
>Assignee: Jarek Jarcec Cecho
> Fix For: 1.99.7
>
> Attachments: SQOOP-2831.patch
>
>
> We have several [different "executions" defined in our integration 
> tests|https://github.com/apache/sqoop/blob/sqoop2/test/pom.xml#L214]. It 
> seems that our purpose was to run one suite per execution, but the side 
> effect is that it's hard to run a single test, as my usual command line:
> {code}
> mvn clean integration-test -pl test -Dtest=$TEST
> {code}
> will execute the test {{$TEST}} once for each execution defined.
> It seems that the maven surefire plugin supports [multiple suites per single 
> execution|http://maven.apache.org/surefire/maven-surefire-plugin/examples/testng.html],
>  so I'm wondering if that will work for us as well.





[jira] [Commented] (SQOOP-2828) Sqoop2: AvroIntermediateDataFormat should read Decimals as BigDecimals instead of Strings

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142083#comment-15142083
 ] 

ASF subversion and git services commented on SQOOP-2828:


Commit edb42dbdc29f3834a7bb4eea291e15c8fce77053 in sqoop's branch 
refs/heads/sqoop2 from [~jarcec]
[ https://git-wip-us.apache.org/repos/asf?p=sqoop.git;h=edb42db ]

SQOOP-2828: Sqoop2: AvroIntermediateDataFormat should read Decimals as 
BigDecimals instead of Strings

(Abraham Fine via Jarek Jarcec Cecho)


> Sqoop2: AvroIntermediateDataFormat should read Decimals as BigDecimals 
> instead of Strings
> -
>
> Key: SQOOP-2828
> URL: https://issues.apache.org/jira/browse/SQOOP-2828
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.99.6
>Reporter: Abraham Fine
>Assignee: Abraham Fine
> Attachments: SQOOP-2828.patch
>
>






[jira] [Updated] (SQOOP-2832) Sqoop2: Precommit: Create log files for individual tests

2016-02-10 Thread Jarek Jarcec Cecho (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jarek Jarcec Cecho updated SQOOP-2832:
--
Attachment: SQOOP-2832.patch

> Sqoop2: Precommit: Create log files for individual tests
> 
>
> Key: SQOOP-2832
> URL: https://issues.apache.org/jira/browse/SQOOP-2832
> Project: Sqoop
>  Issue Type: Bug
>Reporter: Jarek Jarcec Cecho
>Assignee: Jarek Jarcec Cecho
> Fix For: 1.99.7
>
> Attachments: SQOOP-2832.patch
>
>
> I feel that debugging anything in our pre-commit hook is currently mission 
> impossible for several reasons. One of the main ones is that we're 
> generating one single log file that has [grown to 1GB in 
> size|https://builds.apache.org/job/PreCommit-SQOOP-Build/2185/artifact/test/target/surefire-reports/]
>  with all "test related" logs and I'm not even remotely able to find anything 
> there.
> In the normal case we would have one log per test class, but as we've refactored 
> our integration tests to run multiple classes inside a single Suite, we no 
> longer have that for free. We made this change to cut the time it takes 
> to run the integration tests - before, we were initializing mini clusters for 
> each class, which adds ~40 seconds per class, so I don't think that 
> reverting it would be reasonable.
> We should perhaps explore other ways to get multiple log files.





[jira] [Commented] (SQOOP-2828) Sqoop2: AvroIntermediateDataFormat should read Decimals as BigDecimals instead of Strings

2016-02-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142142#comment-15142142
 ] 

Hudson commented on SQOOP-2828:
---

SUCCESS: Integrated in Sqoop2 #1011 (See 
[https://builds.apache.org/job/Sqoop2/1011/])
SQOOP-2828: Sqoop2: AvroIntermediateDataFormat should read Decimals as (jarcec: 
[https://git-wip-us.apache.org/repos/asf?p=sqoop.git=commit=edb42dbdc29f3834a7bb4eea291e15c8fce77053])
* 
connector/connector-sdk/src/test/java/org/apache/sqoop/connector/idf/TestAVROIntermediateDataFormat.java
* 
connector/connector-sdk/src/main/java/org/apache/sqoop/connector/idf/AVROIntermediateDataFormat.java


> Sqoop2: AvroIntermediateDataFormat should read Decimals as BigDecimals 
> instead of Strings
> -
>
> Key: SQOOP-2828
> URL: https://issues.apache.org/jira/browse/SQOOP-2828
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.99.6
>Reporter: Abraham Fine
>Assignee: Abraham Fine
> Fix For: 1.99.7
>
> Attachments: SQOOP-2828.patch
>
>



