[jira] [Commented] (SQOOP-2331) Snappy Compression Support in Sqoop-HCatalog

2018-10-11 Thread Fero Szabo (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-2331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646753#comment-16646753
 ] 

Fero Szabo commented on SQOOP-2331:
---

Hi [~standon],

I wonder if you've managed to find the time to work on this. Or can you 
share any details on when you might be able to?

Thanks,

Fero

> Snappy Compression Support in Sqoop-HCatalog
> 
>
> Key: SQOOP-2331
> URL: https://issues.apache.org/jira/browse/SQOOP-2331
> Project: Sqoop
>  Issue Type: New Feature
>Affects Versions: 1.4.7
>Reporter: Atul Gupta
>Assignee: Shashank
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: SQOOP-2331_0.patch, SQOOP-2331_1.patch, 
> SQOOP-2331_2.patch, SQOOP-2331_2.patch, SQOOP-2331_3.patch
>
>
> Apache Sqoop 1.4.7 currently does not compress in gzip format with the 
> --compress option when it is used together with the --hcatalog-table 
> option. It also does not support the --compression-codec snappy option 
> with --hcatalog-table. It would be nice to add both options in future 
> Sqoop releases; the sketch below shows the kind of invocation this would 
> enable.
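> A minimal sketch of such an invocation (the connect string and table 
> names are hypothetical placeholders; the options themselves are existing 
> Sqoop and Hadoop names):
> {{
> sqoop import \
>   --connect jdbc:mysql://db.example.com/sales \
>   --table orders \
>   --hcatalog-table orders_hcat \
>   --compress \
>   --compression-codec org.apache.hadoop.io.compress.SnappyCodec
> }}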



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Review Request 68541: SQOOP-3104: Create test categories instead of test suites and naming conventions

2018-10-11 Thread Szabolcs Vasas


> On Sept. 10, 2018, 8:55 a.m., Szabolcs Vasas wrote:
> > src/test/org/apache/sqoop/manager/oracle/OraOopTestCase.java
> > Lines 55 (patched)
> > 
> >
> > This class should be an IntegrationTest too.

Sorry, my comment was misleading here; I meant that OraOopTestCase should 
have both the OracleTest and IntegrationTest categories.


> On Sept. 10, 2018, 8:55 a.m., Szabolcs Vasas wrote:
> > src/test/org/apache/sqoop/manager/sqlserver/SQLServerManagerTest.java
> > Lines 73 (patched)
> > 
> >
> > This test should be an IntegrationTest too.

Sorry, my comment was misleading here; I meant that SQLServerManagerTest 
should have both the SqlServerTest and IntegrationTest categories.


- Szabolcs


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/68541/#review208478
---


On Sept. 23, 2018, 2:01 a.m., Nguyen Truong wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/68541/
> ---
> 
> (Updated Sept. 23, 2018, 2:01 a.m.)
> 
> 
> Review request for Sqoop.
> 
> 
> Bugs: SQOOP-3104
> https://issues.apache.org/jira/browse/SQOOP-3104
> 
> 
> Repository: sqoop-trunk
> 
> 
> Description
> ---
> 
> We are currently using test naming conventions to differentiate between 
> manual tests, unit tests and third-party tests. Instead of that, I 
> implemented JUnit categories, which will allow us to have more categories 
> in the future. This also removes the reliance on the test class name.
> 
> Test categories skeleton:
>   SqoopTest ___ UnitTest
>             |__ IntegrationTest
>             |__ ManualTest
> 
>   ThirdPartyTest ___ CubridTest
>                  |__ Db2Test
>                  |__ MainFrameTest
>                  |__ MysqlTest
>                  |__ NetezzaTest
>                  |__ OracleTest
>                  |__ PostgresqlTest
>                  |__ SqlServerTest
> 
>   KerberizedTest
> 
> Categories explanation:
> * SqoopTest: group of the big categories, including:
> - UnitTest: tests one class only, with its dependencies mocked (or 
> kept, if a dependency is lightweight). It must not start a minicluster 
> or an hsqldb database, and it does not need JDBC drivers.
> - IntegrationTest: usually tests a whole scenario. It may start up 
> miniclusters and hsqldb, and connect to external resources like RDBMSs.
> - ManualTest: a deprecated category which should not be used in the 
> future. It only exists to mark the currently existing manual tests.
> * ThirdPartyTest: an orthogonal hierarchy for tests that need a JDBC 
> driver and/or a Docker container/external RDBMS instance to run. 
> Subcategories express what kind of external resource the test needs, 
> e.g. OracleTest needs an Oracle RDBMS and the Oracle driver on the 
> classpath.
> * KerberizedTest: a test that needs Kerberos and has to be run in a 
> separate JVM. A sketch of how these category markers look in code is 
> shown below.
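> 
> A minimal sketch of the JUnit 4 category mechanism described above (the 
> marker names follow the skeleton; the test class is a hypothetical 
> example, not a file from this patch):
> 
> import org.junit.Test;
> import org.junit.experimental.categories.Category;
> 
> // Marker interfaces: they carry no methods, only identity.
> interface SqoopTest {}
> interface IntegrationTest extends SqoopTest {}
> interface ThirdPartyTest {}
> interface OracleTest extends ThirdPartyTest {}
> 
> // A test class can carry several categories at once, e.g. both
> // OracleTest and IntegrationTest, as discussed in this review.
> @Category({OracleTest.class, IntegrationTest.class})
> public class HypotheticalOracleImportTest {
>     @Test
>     public void testImport() { /* exercise an import scenario */ }
> }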
> 
> Opinions are very welcome. Thanks!
> 
> 
> Diffs
> -
> 
>   build.gradle fc7fc0c4c 
>   src/test/org/apache/sqoop/TestConnFactory.java fb6c94059 
>   src/test/org/apache/sqoop/TestIncrementalImport.java 29c477954 
>   src/test/org/apache/sqoop/TestSqoopOptions.java e55682edf 
>   src/test/org/apache/sqoop/accumulo/TestAccumuloUtil.java 631eeff5e 
>   src/test/org/apache/sqoop/authentication/TestKerberosAuthenticator.java 
> f5700ce65 
>   src/test/org/apache/sqoop/db/TestDriverManagerJdbcConnectionFactory.java 
> 244831672 
>   
> src/test/org/apache/sqoop/db/decorator/TestKerberizedConnectionFactoryDecorator.java
>  d3e3fb23e 
>   src/test/org/apache/sqoop/hbase/HBaseImportAddRowKeyTest.java c4caafba5 
>   src/test/org/apache/sqoop/hbase/HBaseKerberizedConnectivityTest.java 
> 3bfb39178 
>   src/test/org/apache/sqoop/hbase/HBaseUtilTest.java c6a808c33 
>   src/test/org/apache/sqoop/hbase/TestHBasePutProcessor.java e78a535f4 
>   src/test/org/apache/sqoop/hcat/TestHCatalogBasic.java ba05cabbb 
>   
> src/test/org/apache/sqoop/hive/HiveServer2ConnectionFactoryInitializerTest.java
>  4d2cb2f88 
>   src/test/org/apache/sqoop/hive/TestHiveClientFactory.java a3c2dc939 
>   src/test/org/apache/sqoop/hive/TestHiveMiniCluster.java 419f888c0 
>   src/test/org/apache/sqoop/hive/TestHiveServer2Client.java 02617295e 
>   src/test/org/apache/sqoop/hive/TestHiveServer2ParquetImport.java b55179a4f 
>   src/test/org/apache/sqoop/hive/TestHiveServer2TextImport.java 410724f37 
>   src/test/org/apache/sqoop/hive/TestHiveTypesForAvroTypeMapping.java 
> 

Re: Review Request 68541: SQOOP-3104: Create test categories instead of test suites and naming conventions

2018-10-11 Thread Szabolcs Vasas

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/68541/#review209452
---



Hi Natalie,

Thank you for updating your patch, and sorry for the misleading comments; 
please see my replies to them.
Once you make this change, I think we can go ahead and commit it.

Regards,
Szabolcs

- Szabolcs Vasas


On Sept. 23, 2018, 2:01 a.m., Nguyen Truong wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/68541/
> ---
> 
> (Updated Sept. 23, 2018, 2:01 a.m.)
> 
> 
> Review request for Sqoop.
> 
> 
> Bugs: SQOOP-3104
> https://issues.apache.org/jira/browse/SQOOP-3104
> 
> 
> Repository: sqoop-trunk
> 
> 
> Description
> ---
> 
> We are currently using test naming conventions to differentiate between 
> manual tests, unit tests and third-party tests. Instead of that, I 
> implemented JUnit categories, which will allow us to have more categories 
> in the future. This also removes the reliance on the test class name.
> 
> Test categories skeleton:
>   SqoopTest ___ UnitTest
>             |__ IntegrationTest
>             |__ ManualTest
> 
>   ThirdPartyTest ___ CubridTest
>                  |__ Db2Test
>                  |__ MainFrameTest
>                  |__ MysqlTest
>                  |__ NetezzaTest
>                  |__ OracleTest
>                  |__ PostgresqlTest
>                  |__ SqlServerTest
> 
>   KerberizedTest
> 
> Categories explanation:
> * SqoopTest: group of the big categories, including:
> - UnitTest: tests one class only, with its dependencies mocked (or 
> kept, if a dependency is lightweight). It must not start a minicluster 
> or an hsqldb database, and it does not need JDBC drivers.
> - IntegrationTest: usually tests a whole scenario. It may start up 
> miniclusters and hsqldb, and connect to external resources like RDBMSs.
> - ManualTest: a deprecated category which should not be used in the 
> future. It only exists to mark the currently existing manual tests.
> * ThirdPartyTest: an orthogonal hierarchy for tests that need a JDBC 
> driver and/or a Docker container/external RDBMS instance to run. 
> Subcategories express what kind of external resource the test needs, 
> e.g. OracleTest needs an Oracle RDBMS and the Oracle driver on the 
> classpath.
> * KerberizedTest: a test that needs Kerberos and has to be run in a 
> separate JVM. A sketch of how a build can select these categories is 
> shown below.
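> 
> A minimal sketch of how a Gradle build can filter on such categories 
> (the task name and category package are illustrative assumptions, not 
> necessarily what build.gradle does in this patch):
> 
> // build.gradle
> task unitTest(type: Test) {
>     useJUnit {
>         // run only tests annotated with the UnitTest category
>         includeCategories 'org.apache.sqoop.category.UnitTest'
>         // Kerberos tests need a separate JVM, keep them out of this task
>         excludeCategories 'org.apache.sqoop.category.KerberizedTest'
>     }
> }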
> 
> Opinions are very welcome. Thanks!
> 
> 
> Diffs
> -
> 
>   build.gradle fc7fc0c4c 
>   src/test/org/apache/sqoop/TestConnFactory.java fb6c94059 
>   src/test/org/apache/sqoop/TestIncrementalImport.java 29c477954 
>   src/test/org/apache/sqoop/TestSqoopOptions.java e55682edf 
>   src/test/org/apache/sqoop/accumulo/TestAccumuloUtil.java 631eeff5e 
>   src/test/org/apache/sqoop/authentication/TestKerberosAuthenticator.java 
> f5700ce65 
>   src/test/org/apache/sqoop/db/TestDriverManagerJdbcConnectionFactory.java 
> 244831672 
>   
> src/test/org/apache/sqoop/db/decorator/TestKerberizedConnectionFactoryDecorator.java
>  d3e3fb23e 
>   src/test/org/apache/sqoop/hbase/HBaseImportAddRowKeyTest.java c4caafba5 
>   src/test/org/apache/sqoop/hbase/HBaseKerberizedConnectivityTest.java 
> 3bfb39178 
>   src/test/org/apache/sqoop/hbase/HBaseUtilTest.java c6a808c33 
>   src/test/org/apache/sqoop/hbase/TestHBasePutProcessor.java e78a535f4 
>   src/test/org/apache/sqoop/hcat/TestHCatalogBasic.java ba05cabbb 
>   
> src/test/org/apache/sqoop/hive/HiveServer2ConnectionFactoryInitializerTest.java
>  4d2cb2f88 
>   src/test/org/apache/sqoop/hive/TestHiveClientFactory.java a3c2dc939 
>   src/test/org/apache/sqoop/hive/TestHiveMiniCluster.java 419f888c0 
>   src/test/org/apache/sqoop/hive/TestHiveServer2Client.java 02617295e 
>   src/test/org/apache/sqoop/hive/TestHiveServer2ParquetImport.java b55179a4f 
>   src/test/org/apache/sqoop/hive/TestHiveServer2TextImport.java 410724f37 
>   src/test/org/apache/sqoop/hive/TestHiveTypesForAvroTypeMapping.java 
> 276e9eaa4 
>   src/test/org/apache/sqoop/hive/TestTableDefWriter.java 626ad22f6 
>   src/test/org/apache/sqoop/hive/TestTableDefWriterForExternalTable.java 
> f1768ee76 
>   src/test/org/apache/sqoop/importjob/avro/AvroImportForNumericTypesTest.java 
> ff13dc3bc 
>   src/test/org/apache/sqoop/io/TestCodecMap.java e71921823 
>   src/test/org/apache/sqoop/io/TestLobFile.java 2bc95f283 
>   src/test/org/apache/sqoop/io/TestNamedFifo.java a93784e08 
>   src/test/org/apache/sqoop/io/TestSplittableBufferedWriter.java c59aa26ad 
>   src/test/org/apache/sqoop/lib/TestBlobRef.java b271d3c7b 
>   

[jira] [Commented] (SQOOP-3378) Error during direct Netezza import/export can interrupt process in uncontrolled ways

2018-10-11 Thread Szabolcs Vasas (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646265#comment-16646265
 ] 

Szabolcs Vasas commented on SQOOP-3378:
---

It seems one of the new tests times out in the Jenkins job, which is 
really strange.

I have successfully run this test with ant on both macOS and Ubuntu, and 
it worked fine.

This could be an infrastructure issue as well; I will check it again 
later.

> Error during direct Netezza import/export can interrupt process in 
> uncontrolled ways
> 
>
> Key: SQOOP-3378
> URL: https://issues.apache.org/jira/browse/SQOOP-3378
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.7
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Fix For: 1.5.0, 3.0.0
>
> Attachments: SQOOP-3378.2.patch
>
>
> SQLException during a JDBC operation in direct Netezza import/export 
> signals the parent thread to fail fast by interrupting it (see 
> [here|https://github.com/apache/sqoop/blob/c814e58348308b05b215db427412cd6c0b21333e/src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaJDBCStatementRunner.java#L92]).
> We're [trying to process the interrupt in the 
> parent|https://github.com/apache/sqoop/blob/c814e58348308b05b215db427412cd6c0b21333e/src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaExternalTableExportMapper.java#L232]
>  (main) thread, but there's no guarantee that we're not in some blocking 
> internal call that will process the interrupted flag and reset it before 
> we're able to check.
> It is also possible that the parent thread has passed the "checking part" 
> when it gets interrupted. In the case of {{NetezzaExternalTableExportMapper}} 
> this can interrupt the upload of log files.
> I'd recommend using some other means of communication between the threads 
> than interrupts; a sketch of one such alternative follows below.
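> 
> A minimal sketch of that alternative: sharing the failure through a 
> thread-safe holder that the parent polls (class and method names are 
> hypothetical; this is an illustration, not the committed fix):
> 
> import java.util.concurrent.atomic.AtomicReference;
> 
> public class JdbcWorker implements Runnable {
>     // The worker records its failure here; the parent polls getFailure()
>     // instead of relying on Thread.interrupt(), whose flag a blocking
>     // internal call may consume and reset before the parent can check.
>     private final AtomicReference<Exception> failure = new AtomicReference<>();
> 
>     @Override
>     public void run() {
>         try {
>             runStatement(); // the JDBC work
>         } catch (Exception e) {
>             failure.set(e); // signal without interrupting the parent
>         }
>     }
> 
>     public Exception getFailure() {
>         return failure.get();
>     }
> 
>     private void runStatement() throws Exception { /* ... */ }
> }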



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (SQOOP-3326) Mainframe FTP listing for GDG should filter out non-GDG datasets in a heterogeneous listing

2018-10-11 Thread Chris Teoh (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Teoh updated SQOOP-3326:
--
Attachment: SQOOP-3326-1.patch

> Mainframe FTP listing for GDG should filter out non-GDG datasets in a 
> heterogeneous listing
> ---
>
> Key: SQOOP-3326
> URL: https://issues.apache.org/jira/browse/SQOOP-3326
> Project: Sqoop
>  Issue Type: Improvement
>Reporter: Chris Teoh
>Assignee: Chris Teoh
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: SQOOP-3326-1.patch
>
>
> The FTP listing will automatically assume the first file in the listing is 
> the most recent GDG file. This is a problem when there are mixed datasets 
> in the listing, because the FTP listing doesn't filter these out.
>  
> The GDG base is: HLQ.ABC.DEF.AB15HUP
>  
> The sequential dataset in the middle of the GDG member listing is: 
> HLQ.ABC.DEF.AB15HUP.DATA
>  
> The pattern for listing GDG members should be: 
> <<GDG base name>>.G\d{4}V\d{2}
>  
>  Sample below:-
> {{   Menu  Options  View  Utilities  Compilers  Help  
>
>  
>  DSLIST - Data Sets Matching HLQ.ABC.DEF.GDGBASE   Row 1 of 8
>  Command ===>  Scroll ===> 
> PAGE
>   
>   
>  Command - Enter "/" to select action  Message   
> Volume
>  
> ---
>   HLQ.ABC.DEF.GDGBASE  ??
>   HLQ.ABC.DEF.GDGBASE.DUMMYSHT331
>   HLQ.ABC.DEF.GDGBASE.G0034V00 H19761
>   HLQ.ABC.DEF.GDGBASE.G0035V00 H81751
>   HLQ.ABC.DEF.GDGBASE.G0035V00.COPYSHT337
>   HLQ.ABC.DEF.GDGBASE.G0036V00 H73545
>   HLQ.ABC.DEF.GDGBASE.G0037V00 G10987
>   HLQ.ABC.DEF.GDGBASE.HELLOSHT33A
>  * End of Data Set list 
> 
> ftp> open some.machine.network.zxc.au
> Connected to some.machine.network.zxc.au (11.22.33.44).
> 220-TCPFTP01 IBM FTP CS V2R1 at some.machine.network.zxc.au, 00:12:29 on 
> 2018-05-29.
> 220 Connection will close if idle for more than 10 minutes.
> Name (some.machine.network.zxc.au:someuser):
> 331 Send password please.
> Password:
> 230 someuser is logged on.  Working directory is "someuser.".
> Remote system type is MVS.
> ftp> cd  'HLQ.ABC.DEF.GDGBASE'
> 250 "HLQ.ABC.DEF.GDGBASE." is the working directory name prefix.
> ftp> dir
> 227 Entering Passive Mode (11,22,33,44,55,66)
> 125 List started OK
> Volume UnitReferred Ext Used Recfm Lrecl BlkSz Dsorg Dsname
> H19761 Tape G0034V00
> H81751 Tape G0035V00
> H73545 Tape G0036V00
> G10987 Tape G0037V00
> SHT331 3390   **NONE**1   15  VB 114 27998  PS  DUMMY
> SHT337 3390   **NONE**1   15  VB 114 27998  PS  G0035V00.COPY
> SHT33A 3390   **NONE**1   15  VB 114 27998  PS  HELLO
> 250 List completed successfully.
> ftp>}}
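> 
> A minimal sketch of the kind of filtering the description asks for, 
> keeping only names that end in the G\d{4}V\d{2} member pattern (class and 
> method names are hypothetical, not the attached patch):
> 
> import java.util.List;
> import java.util.regex.Pattern;
> import java.util.stream.Collectors;
> 
> public class GdgMemberFilter {
>     // GDG members end in GnnnnVnn, e.g. G0037V00; a sequential dataset
>     // such as HLQ.ABC.DEF.AB15HUP.DATA does not match.
>     private static final Pattern GDG_MEMBER =
>         Pattern.compile("(^|\\.)G\\d{4}V\\d{2}$");
> 
>     public static List<String> filterGdgMembers(List<String> datasets) {
>         return datasets.stream()
>             .filter(name -> GDG_MEMBER.matcher(name).find())
>             .collect(Collectors.toList());
>     }
> }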



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (SQOOP-3327) Mainframe FTP needs to Include "Migrated" datasets when parsing the FTP list

2018-10-11 Thread Chris Teoh (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Teoh updated SQOOP-3327:
--
Attachment: SQOOP-3327-1.patch

> Mainframe FTP needs to Include "Migrated" datasets when parsing the FTP list
> 
>
> Key: SQOOP-3327
> URL: https://issues.apache.org/jira/browse/SQOOP-3327
> Project: Sqoop
>  Issue Type: Improvement
>Reporter: Chris Teoh
>Assignee: Chris Teoh
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: SQOOP-3327-1.patch
>
>
> Need to Include "Migrated" datasets when parsing the FTP list.
>  
> ** This applies to sequential datasets as well as GDG members **
>  
> Identifying migrated datasets – when performing manual FTP
>  
> ftp> open abc.def.ghi.jkl.mno
> Connected to abc.def.ghi.jkl.mno (11.22.33.444).
> 220-TCPFTP01 Some FTP Server at abc.def.ghi.jkl.mno, 22:34:11 on 2018-01-22.
> 220 Connection will close if idle for more than 10 minutes.
> Name (abc.def.ghi.jkl.mno:some_user): some_user
> 331 Send password please.
> Password:
> 230 some_user is logged on.  Working directory is "some_user.".
> Remote system type is MVS.
> ftp> dir
> 227 Entering Passive Mode (33,44,555,66,7,8)
> 125 List started OK
> Volume Unit    Referred Ext Used Recfm Lrecl BlkSz Dsorg Dsname
> Migrated    DEV.DATA
> Migrated    DUMMY.DATA
> OVR343 3390   2018/01/23  1    1  FB 132 27984  PS  EMPTY
> Migrated    JCL.CNTL
> OVR346 3390   2018/01/22  1    1  FB  80 27920  PS  MIXED.FB80
> Migrated    PLAIN.FB80
> OVR341 3390   2018/01/23  1    9  VA 125   129  PS  PRDA.SPFLOG1.LIST
> G20427 Tape 
> UNLOAD.ABCDE.ZZ9UYT.FB.TAPE
> SEM352 3390   2018/01/23  1    1  FB 150  1500  PS  USER.BRODCAST
> OVR346 3390   2018/01/23  3    3  FB  80  6160  PO  USER.ISPPROF
> 250 List completed successfully.
>  
> "Migrated" should be included as one of the regex pattern searches.
> Assuming space delimited, first column will be "Migrated", and the second 
> (and final) column will contain the dataset name.
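> 
> A minimal sketch of parsing such a "Migrated" line (a hypothetical 
> illustration of the regex approach the description suggests, not the 
> attached patch):
> 
> import java.util.regex.Matcher;
> import java.util.regex.Pattern;
> 
> public class MigratedLineParser {
>     // "Migrated" followed by whitespace and the dataset name,
>     // e.g. "Migrated    DEV.DATA".
>     private static final Pattern MIGRATED =
>         Pattern.compile("^Migrated\\s+(\\S+)$");
> 
>     // Returns the dataset name, or null if the line is not a
>     // migrated-dataset entry.
>     public static String parse(String line) {
>         Matcher m = MIGRATED.matcher(line.trim());
>         return m.matches() ? m.group(1) : null;
>     }
> }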



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (SQOOP-3378) Error during direct Netezza import/export can interrupt process in uncontrolled ways

2018-10-11 Thread Daniel Voros (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646190#comment-16646190
 ] 

Daniel Voros commented on SQOOP-3378:
-

Uploaded, thank you [~vasas].

> Error during direct Netezza import/export can interrupt process in 
> uncontrolled ways
> 
>
> Key: SQOOP-3378
> URL: https://issues.apache.org/jira/browse/SQOOP-3378
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.7
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Fix For: 1.5.0, 3.0.0
>
> Attachments: SQOOP-3378.2.patch
>
>
> SQLException during a JDBC operation in direct Netezza import/export 
> signals the parent thread to fail fast by interrupting it (see 
> [here|https://github.com/apache/sqoop/blob/c814e58348308b05b215db427412cd6c0b21333e/src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaJDBCStatementRunner.java#L92]).
> We're [trying to process the interrupt in the 
> parent|https://github.com/apache/sqoop/blob/c814e58348308b05b215db427412cd6c0b21333e/src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaExternalTableExportMapper.java#L232]
>  (main) thread, but there's no guarantee that we're not in some blocking 
> internal call that will process the interrupted flag and reset it before 
> we're able to check.
> It is also possible that the parent thread has passed the "checking part" 
> when it gets interrupted. In the case of {{NetezzaExternalTableExportMapper}} 
> this can interrupt the upload of log files.
> I'd recommend using some other means of communication between the threads 
> than interrupts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (SQOOP-3378) Error during direct Netezza import/export can interrupt process in uncontrolled ways

2018-10-11 Thread Daniel Voros (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Voros updated SQOOP-3378:

Attachment: SQOOP-3378.2.patch

> Error during direct Netezza import/export can interrupt process in 
> uncontrolled ways
> 
>
> Key: SQOOP-3378
> URL: https://issues.apache.org/jira/browse/SQOOP-3378
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.7
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Fix For: 1.5.0, 3.0.0
>
> Attachments: SQOOP-3378.2.patch
>
>
> SQLException during a JDBC operation in direct Netezza import/export 
> signals the parent thread to fail fast by interrupting it (see 
> [here|https://github.com/apache/sqoop/blob/c814e58348308b05b215db427412cd6c0b21333e/src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaJDBCStatementRunner.java#L92]).
> We're [trying to process the interrupt in the 
> parent|https://github.com/apache/sqoop/blob/c814e58348308b05b215db427412cd6c0b21333e/src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaExternalTableExportMapper.java#L232]
>  (main) thread, but there's no guarantee that we're not in some blocking 
> internal call that will process the interrupted flag and reset it before 
> we're able to check.
> It is also possible that the parent thread has passed the "checking part" 
> when it gets interrupted. In the case of {{NetezzaExternalTableExportMapper}} 
> this can interrupt the upload of log files.
> I'd recommend using some other means of communication between the threads 
> than interrupts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (SQOOP-3378) Error during direct Netezza import/export can interrupt process in uncontrolled ways

2018-10-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646179#comment-16646179
 ] 

Hudson commented on SQOOP-3378:
---

FAILURE: Integrated in Jenkins build Sqoop-hadoop200 #1212 (See 
[https://builds.apache.org/job/Sqoop-hadoop200/1212/])
SQOOP-3378: Error during direct Netezza import/export can interrupt (vasas: 
[https://git-wip-us.apache.org/repos/asf?p=sqoop.git;a=commit;h=40f0b74c012da917c6750a0fcce1f0ae13bd5f46])
* (add) 
src/test/org/apache/sqoop/mapreduce/db/netezza/TestNetezzaExternalTableExportMapper.java
* (edit) 
src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaExternalTableImportMapper.java
* (edit) 
src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaExternalTableExportMapper.java
* (add) 
src/test/org/apache/sqoop/mapreduce/db/netezza/TestNetezzaExternalTableImportMapper.java
* (edit) 
src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaJDBCStatementRunner.java


> Error during direct Netezza import/export can interrupt process in 
> uncontrolled ways
> 
>
> Key: SQOOP-3378
> URL: https://issues.apache.org/jira/browse/SQOOP-3378
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.7
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Fix For: 1.5.0, 3.0.0
>
>
> SQLException during a JDBC operation in direct Netezza import/export 
> signals the parent thread to fail fast by interrupting it (see 
> [here|https://github.com/apache/sqoop/blob/c814e58348308b05b215db427412cd6c0b21333e/src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaJDBCStatementRunner.java#L92]).
> We're [trying to process the interrupt in the 
> parent|https://github.com/apache/sqoop/blob/c814e58348308b05b215db427412cd6c0b21333e/src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaExternalTableExportMapper.java#L232]
>  (main) thread, but there's no guarantee that we're not in some blocking 
> internal call that will process the interrupted flag and reset it before 
> we're able to check.
> It is also possible that the parent thread has passed the "checking part" 
> when it gets interrupted. In the case of {{NetezzaExternalTableExportMapper}} 
> this can interrupt the upload of log files.
> I'd recommend using some other means of communication between the threads 
> than interrupts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Review Request 68606: Error during direct Netezza import/export can interrupt process in uncontrolled ways

2018-10-11 Thread Szabolcs Vasas

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/68606/#review209446
---


Ship it!




Ship It!

- Szabolcs Vasas


On Sept. 3, 2018, 11:32 a.m., daniel voros wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/68606/
> ---
> 
> (Updated Sept. 3, 2018, 11:32 a.m.)
> 
> 
> Review request for Sqoop.
> 
> 
> Bugs: SQOOP-3378
> https://issues.apache.org/jira/browse/SQOOP-3378
> 
> 
> Repository: sqoop-trunk
> 
> 
> Description
> ---
> 
> `SQLException` during a JDBC operation in direct Netezza import/export 
> signals the parent thread to fail fast by interrupting it.
> We're trying to process the interrupt in the parent (main) thread, but 
> there's no guarantee that we're not in some internal call that will process 
> the interrupted flag and reset it before we're able to check.
> 
> It is also possible that the parent thread has passed the "checking part" 
> when it gets interrupted. In the case of `NetezzaExternalTableExportMapper` 
> this can interrupt the upload of log files.
> 
> I'd recommend using some other means of communication between the threads 
> than interrupts.
> 
> 
> Diffs
> -
> 
>   
> src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaExternalTableExportMapper.java
>  5bf21880 
>   
> src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaExternalTableImportMapper.java
>  306062aa 
>   
> src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaJDBCStatementRunner.java
>  cedfd235 
>   
> src/test/org/apache/sqoop/mapreduce/db/netezza/TestNetezzaExternalTableExportMapper.java
>  PRE-CREATION 
>   
> src/test/org/apache/sqoop/mapreduce/db/netezza/TestNetezzaExternalTableImportMapper.java
>  PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/68606/diff/2/
> 
> 
> Testing
> ---
> 
> added new UTs and checked manual Netezza tests (NetezzaExportManualTest, 
> NetezzaImportManualTest)
> 
> 
> Thanks,
> 
> daniel voros
> 
>



Re: Review Request 68606: Error during direct Netezza import/export can interrupt process in uncontrolled ways

2018-10-11 Thread Boglarka Egyed


> On Oct. 4, 2018, 12:46 p.m., Boglarka Egyed wrote:
> > Hi Daniel,
> > 
> > Apart from the discussion with Szabolcs about the expected exception 
> > handling I'm OK with your change. All tests passed.
> > 
> > Thanks,
> > Bogi
> 
> daniel voros wrote:
> Hey Bogi,
> 
> Thanks for reviewing! What do you mean by expected exception handling? 
> I'm happy to update the patch if you have concerns!
> 
> Regards,
> Daniel

Hi Daniel,

My apologies, I confused your patch with another one. Please ignore my 
previous comment regarding expected exceptions.
Ship it! :)

Regards,
Bogi


- Boglarka


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/68606/#review209221
---


On Sept. 3, 2018, 11:32 a.m., daniel voros wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/68606/
> ---
> 
> (Updated Sept. 3, 2018, 11:32 a.m.)
> 
> 
> Review request for Sqoop.
> 
> 
> Bugs: SQOOP-3378
> https://issues.apache.org/jira/browse/SQOOP-3378
> 
> 
> Repository: sqoop-trunk
> 
> 
> Description
> ---
> 
> `SQLException` during a JDBC operation in direct Netezza import/export 
> signals the parent thread to fail fast by interrupting it.
> We're trying to process the interrupt in the parent (main) thread, but 
> there's no guarantee that we're not in some internal call that will process 
> the interrupted flag and reset it before we're able to check.
> 
> It is also possible that the parent thread has passed the "checking part" 
> when it gets interrupted. In the case of `NetezzaExternalTableExportMapper` 
> this can interrupt the upload of log files.
> 
> I'd recommend using some other means of communication between the threads 
> than interrupts.
> 
> 
> Diffs
> -
> 
>   
> src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaExternalTableExportMapper.java
>  5bf21880 
>   
> src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaExternalTableImportMapper.java
>  306062aa 
>   
> src/java/org/apache/sqoop/mapreduce/db/netezza/NetezzaJDBCStatementRunner.java
>  cedfd235 
>   
> src/test/org/apache/sqoop/mapreduce/db/netezza/TestNetezzaExternalTableExportMapper.java
>  PRE-CREATION 
>   
> src/test/org/apache/sqoop/mapreduce/db/netezza/TestNetezzaExternalTableImportMapper.java
>  PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/68606/diff/2/
> 
> 
> Testing
> ---
> 
> added new UTs and checked manual Netezza tests (NetezzaExportManualTest, 
> NetezzaImportManualTest)
> 
> 
> Thanks,
> 
> daniel voros
> 
>



[jira] [Updated] (SQOOP-3361) Test compressing imported data with S3

2018-10-11 Thread Boglarka Egyed (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boglarka Egyed updated SQOOP-3361:
--
Issue Type: Improvement  (was: Sub-task)
Parent: (was: SQOOP-3345)

> Test compressing imported data with S3
> --
>
> Key: SQOOP-3361
> URL: https://issues.apache.org/jira/browse/SQOOP-3361
> Project: Sqoop
>  Issue Type: Improvement
>Affects Versions: 1.4.7
>Reporter: Boglarka Egyed
>Assignee: Boglarka Egyed
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (SQOOP-3361) Test compressing imported data with S3

2018-10-11 Thread Boglarka Egyed (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boglarka Egyed updated SQOOP-3361:
--
Priority: Minor  (was: Major)

> Test compressing imported data with S3
> --
>
> Key: SQOOP-3361
> URL: https://issues.apache.org/jira/browse/SQOOP-3361
> Project: Sqoop
>  Issue Type: Improvement
>Affects Versions: 1.4.7
>Reporter: Boglarka Egyed
>Assignee: Boglarka Egyed
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (SQOOP-3384) Document import into external Hive table backed by S3

2018-10-11 Thread Boglarka Egyed (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boglarka Egyed updated SQOOP-3384:
--
Affects Version/s: 1.4.7

> Document import into external Hive table backed by S3
> -
>
> Key: SQOOP-3384
> URL: https://issues.apache.org/jira/browse/SQOOP-3384
> Project: Sqoop
>  Issue Type: Sub-task
>Affects Versions: 1.4.7
>Reporter: Boglarka Egyed
>Assignee: Boglarka Egyed
>Priority: Major
> Attachments: SQOOP-3384.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (SQOOP-3376) Test import into external Hive table backed by S3

2018-10-11 Thread Boglarka Egyed (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boglarka Egyed updated SQOOP-3376:
--
Affects Version/s: 1.4.7

> Test import into external Hive table backed by S3
> -
>
> Key: SQOOP-3376
> URL: https://issues.apache.org/jira/browse/SQOOP-3376
> Project: Sqoop
>  Issue Type: Sub-task
>Affects Versions: 1.4.7
>Reporter: Boglarka Egyed
>Assignee: Boglarka Egyed
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: SQOOP-3376.patch, SQOOP-3376.patch, SQOOP-3376.patch, 
> SQOOP-3376.patch, SQOOP-3376.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (SQOOP-3391) Test storing AWS credentials in Hadoop CredentialProvider during import

2018-10-11 Thread Boglarka Egyed (JIRA)
Boglarka Egyed created SQOOP-3391:
-

 Summary: Test storing AWS credentials in Hadoop CredentialProvider 
during import
 Key: SQOOP-3391
 URL: https://issues.apache.org/jira/browse/SQOOP-3391
 Project: Sqoop
  Issue Type: Sub-task
Affects Versions: 1.4.7
Reporter: Boglarka Egyed
Assignee: Boglarka Egyed
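
For context, a minimal sketch of the mechanism the summary refers to, 
using the standard Hadoop credential CLI and configuration property 
(the jceks path, connect string and bucket are illustrative placeholders):

# Store the S3A keys in a JCEKS credential store instead of plain text:
hadoop credential create fs.s3a.access.key \
  -provider jceks://hdfs/user/example/aws.jceks
hadoop credential create fs.s3a.secret.key \
  -provider jceks://hdfs/user/example/aws.jceks

# Point the import at the store via the standard Hadoop property:
sqoop import \
  -Dhadoop.security.credential.provider.path=jceks://hdfs/user/example/aws.jceks \
  --connect jdbc:mysql://db.example.com/sales --table orders \
  --target-dir s3a://example-bucket/orders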






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (SQOOP-3390) Document S3Guard usage with Sqoop

2018-10-11 Thread Boglarka Egyed (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boglarka Egyed updated SQOOP-3390:
--
Affects Version/s: 1.4.7

> Document S3Guard usage with Sqoop
> -
>
> Key: SQOOP-3390
> URL: https://issues.apache.org/jira/browse/SQOOP-3390
> Project: Sqoop
>  Issue Type: Sub-task
>Affects Versions: 1.4.7
>Reporter: Boglarka Egyed
>Assignee: Boglarka Egyed
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (SQOOP-3387) Include Column-Remarks

2018-10-11 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SQOOP-3387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomas Sebastian Hätälä updated SQOOP-3387:
--
Description: 
In most RDBMSs it is possible to enter comments/remarks for table and view 
columns. That way a user can obtain additional information regarding the 
data and how to use it.

With the Avro file format it would be possible to store this information in 
the schema file using the "doc" tag. At the moment, however, this is left 
blank.

Review: https://reviews.apache.org/r/68989/

  was:
In most RDBMSs it is possible to enter comments/remarks for table and view 
columns. That way a user can obtain additional information regarding the 
data and how to use it.

With the Avro file format it would be possible to store this information in 
the schema file using the "doc" tag. At the moment, however, this is left 
blank.


> Include Column-Remarks
> --
>
> Key: SQOOP-3387
> URL: https://issues.apache.org/jira/browse/SQOOP-3387
> Project: Sqoop
>  Issue Type: Wish
>  Components: connectors, metastore
>Affects Versions: 1.4.7
>Reporter: Tomas Sebastian Hätälä
>Assignee: Tomas Sebastian Hätälä
>Priority: Critical
>  Labels: easy-fix, features, pull-request-available
> Fix For: 1.5.0
>
> Attachments: SQOOP_3387.patch
>
>
> In most RDBMSs it is possible to enter comments/remarks for table and view 
> columns. That way a user can obtain additional information regarding the 
> data and how to use it.
> With the Avro file format it would be possible to store this information 
> in the schema file using the "doc" tag. At the moment, however, this is 
> left blank.
> Review: https://reviews.apache.org/r/68989/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Review Request 68989: [SQOOP-3387] Include Column-Remarks

2018-10-11 Thread Tomas Sebastian Hätälä via Review Board

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/68989/
---

(Updated Oct. 11, 2018, 7:23 a.m.)


Review request for Sqoop.


Changes
---

add link to Sqoop jira


Bugs: SQOOP-3387
https://issues.apache.org/jira/browse/SQOOP-3387


Repository: sqoop-trunk


Description
---

In most RDBMSs it is possible to enter comments/remarks for table and view 
columns. That way a user can obtain additional information regarding the 
data and how to use it.

With the Avro file format it would be possible to store this information in 
the schema file using the "doc" tag. At the moment, however, this is left 
blank.

This patch includes table and column remarks for Oracle DB and Avro; a 
sketch of the Avro side follows below.
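
A minimal sketch of attaching remarks as Avro "doc" strings via the 
standard Avro SchemaBuilder API (the record, field and remark values are 
hypothetical; this illustrates the idea, not the patch itself):

import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;

public class ColumnRemarksSketch {
    public static void main(String[] args) {
        // In Sqoop the remarks would come from JDBC metadata (e.g. the
        // REMARKS column of DatabaseMetaData.getColumns()); they are
        // hardcoded here for brevity.
        Schema schema = SchemaBuilder.record("orders")
            .doc("Table remark from the RDBMS")        // record-level doc
            .fields()
            .name("order_id")
                .doc("Column remark: surrogate key")   // field-level doc
                .type().longType().noDefault()
            .endRecord();
        System.out.println(schema.toString(true));     // "doc" appears here
    }
}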


Diffs
-

  src/java/org/apache/sqoop/manager/ConnManager.java 4c1e8f5 
  src/java/org/apache/sqoop/manager/SqlManager.java d82332a 
  src/java/org/apache/sqoop/manager/oracle/OraOopConnManager.java 95eaacf 
  src/java/org/apache/sqoop/orm/AvroSchemaGenerator.java 7a2a5f9 
  src/java/org/apache/sqoop/orm/ClassWriter.java 46d0698 


Diff: https://reviews.apache.org/r/68989/diff/1/


Testing
---


Thanks,

Tomas Sebastian Hätälä



Review Request 68989: [SQOOP-3387] Include Column-Remarks

2018-10-11 Thread Tomas Sebastian Hätälä via Review Board

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/68989/
---

Review request for Sqoop.


Repository: sqoop-trunk


Description
---

In most RDBMSs it is possible to enter comments/remarks for table and view 
columns. That way a user can obtain additional information regarding the 
data and how to use it.

With the Avro file format it would be possible to store this information in 
the schema file using the "doc" tag. At the moment, however, this is left 
blank.

This patch includes table and column remarks for Oracle DB and Avro.


Diffs
-

  src/java/org/apache/sqoop/manager/ConnManager.java 4c1e8f5 
  src/java/org/apache/sqoop/manager/SqlManager.java d82332a 
  src/java/org/apache/sqoop/manager/oracle/OraOopConnManager.java 95eaacf 
  src/java/org/apache/sqoop/orm/AvroSchemaGenerator.java 7a2a5f9 
  src/java/org/apache/sqoop/orm/ClassWriter.java 46d0698 


Diff: https://reviews.apache.org/r/68989/diff/1/


Testing
---


Thanks,

Tomas Sebastian Hätälä



[jira] [Updated] (SQOOP-3387) Include Column-Remarks

2018-10-11 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SQOOP-3387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomas Sebastian Hätälä updated SQOOP-3387:
--
Attachment: (was: SQOOP_3387.patch)

> Include Column-Remarks
> --
>
> Key: SQOOP-3387
> URL: https://issues.apache.org/jira/browse/SQOOP-3387
> Project: Sqoop
>  Issue Type: Wish
>  Components: connectors, metastore
>Affects Versions: 1.4.7
>Reporter: Tomas Sebastian Hätälä
>Assignee: Tomas Sebastian Hätälä
>Priority: Critical
>  Labels: easy-fix, features, pull-request-available
> Fix For: 1.5.0
>
> Attachments: SQOOP_3387.patch
>
>
> In most RDBMSs it is possible to enter comments/remarks for table and view 
> columns. That way a user can obtain additional information regarding the 
> data and how to use it.
> With the Avro file format it would be possible to store this information 
> in the schema file using the "doc" tag. At the moment, however, this is 
> left blank.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (SQOOP-3387) Include Column-Remarks

2018-10-11 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SQOOP-3387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomas Sebastian Hätälä updated SQOOP-3387:
--
Attachment: SQOOP_3387.patch

> Include Column-Remarks
> --
>
> Key: SQOOP-3387
> URL: https://issues.apache.org/jira/browse/SQOOP-3387
> Project: Sqoop
>  Issue Type: Wish
>  Components: connectors, metastore
>Affects Versions: 1.4.7
>Reporter: Tomas Sebastian Hätälä
>Assignee: Tomas Sebastian Hätälä
>Priority: Critical
>  Labels: easy-fix, features, pull-request-available
> Fix For: 1.5.0
>
> Attachments: SQOOP_3387.patch
>
>
> In most RDBMSs it is possible to enter comments/remarks for table and view 
> columns. That way a user can obtain additional information regarding the 
> data and how to use it.
> With the Avro file format it would be possible to store this information 
> in the schema file using the "doc" tag. At the moment, however, this is 
> left blank.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (SQOOP-3327) Mainframe FTP needs to Include "Migrated" datasets when parsing the FTP list

2018-10-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646023#comment-16646023
 ] 

Hudson commented on SQOOP-3327:
---

SUCCESS: Integrated in Jenkins build Sqoop-hadoop200 #1211 (See 
[https://builds.apache.org/job/Sqoop-hadoop200/1211/])
SQOOP-3327: Mainframe FTP needs to Include "Migrated" datasets when (vasas: 
[https://git-wip-us.apache.org/repos/asf?p=sqoop.git;a=commit;h=71523079bc61061867ced9b6a597150a3c72a964])
* (edit) 
src/java/org/apache/sqoop/mapreduce/mainframe/MainframeFTPFileEntryParser.java
* (edit) 
src/test/org/apache/sqoop/mapreduce/mainframe/TestMainframeFTPFileEntryParser.java


> Mainframe FTP needs to Include "Migrated" datasets when parsing the FTP list
> 
>
> Key: SQOOP-3327
> URL: https://issues.apache.org/jira/browse/SQOOP-3327
> Project: Sqoop
>  Issue Type: Improvement
>Reporter: Chris Teoh
>Assignee: Chris Teoh
>Priority: Minor
> Fix For: 3.0.0
>
>
> Need to Include "Migrated" datasets when parsing the FTP list.
>  
> ** This applies to sequential datasets as well as GDG members **
>  
> Identifying migrated datasets – when performing manual FTP
>  
> ftp> open abc.def.ghi.jkl.mno
> Connected to abc.def.ghi.jkl.mno (11.22.33.444).
> 220-TCPFTP01 Some FTP Server at abc.def.ghi.jkl.mno, 22:34:11 on 2018-01-22.
> 220 Connection will close if idle for more than 10 minutes.
> Name (abc.def.ghi.jkl.mno:some_user): some_user
> 331 Send password please.
> Password:
> 230 some_user is logged on.  Working directory is "some_user.".
> Remote system type is MVS.
> ftp> dir
> 227 Entering Passive Mode (33,44,555,66,7,8)
> 125 List started OK
> Volume Unit    Referred Ext Used Recfm Lrecl BlkSz Dsorg Dsname
> Migrated    DEV.DATA
> Migrated    DUMMY.DATA
> OVR343 3390   2018/01/23  1    1  FB 132 27984  PS  EMPTY
> Migrated    JCL.CNTL
> OVR346 3390   2018/01/22  1    1  FB  80 27920  PS  MIXED.FB80
> Migrated    PLAIN.FB80
> OVR341 3390   2018/01/23  1    9  VA 125   129  PS  PRDA.SPFLOG1.LIST
> G20427 Tape 
> UNLOAD.ABCDE.ZZ9UYT.FB.TAPE
> SEM352 3390   2018/01/23  1    1  FB 150  1500  PS  USER.BRODCAST
> OVR346 3390   2018/01/23  3    3  FB  80  6160  PO  USER.ISPPROF
> 250 List completed successfully.
>  
> "Migrated" should be included as one of the regex pattern searches.
> Assuming space delimited, first column will be "Migrated", and the second 
> (and final) column will contain the dataset name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (SQOOP-3327) Mainframe FTP needs to Include "Migrated" datasets when parsing the FTP list

2018-10-11 Thread Szabolcs Vasas (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szabolcs Vasas resolved SQOOP-3327.
---
   Resolution: Fixed
Fix Version/s: 3.0.0

Hi [~chris.t...@gmail.com],

Your patch is now committed, thank you for your contribution!

Please resolve your Review Board request and upload the patch file to this 
Jira too.

Thanks and regards,

Szabolcs

> Mainframe FTP needs to Include "Migrated" datasets when parsing the FTP list
> 
>
> Key: SQOOP-3327
> URL: https://issues.apache.org/jira/browse/SQOOP-3327
> Project: Sqoop
>  Issue Type: Improvement
>Reporter: Chris Teoh
>Assignee: Chris Teoh
>Priority: Minor
> Fix For: 3.0.0
>
>
> Need to Include "Migrated" datasets when parsing the FTP list.
>  
> ** This applies to sequential datasets as well as GDG members **
>  
> Identifying migrated datasets – when performing manual FTP
>  
> ftp> open abc.def.ghi.jkl.mno
> Connected to abc.def.ghi.jkl.mno (11.22.33.444).
> 220-TCPFTP01 Some FTP Server at abc.def.ghi.jkl.mno, 22:34:11 on 2018-01-22.
> 220 Connection will close if idle for more than 10 minutes.
> Name (abc.def.ghi.jkl.mno:some_user): some_user
> 331 Send password please.
> Password:
> 230 some_user is logged on.  Working directory is "some_user.".
> Remote system type is MVS.
> ftp> dir
> 227 Entering Passive Mode (33,44,555,66,7,8)
> 125 List started OK
> Volume Unit    Referred Ext Used Recfm Lrecl BlkSz Dsorg Dsname
> Migrated    DEV.DATA
> Migrated    DUMMY.DATA
> OVR343 3390   2018/01/23  1    1  FB 132 27984  PS  EMPTY
> Migrated    JCL.CNTL
> OVR346 3390   2018/01/22  1    1  FB  80 27920  PS  MIXED.FB80
> Migrated    PLAIN.FB80
> OVR341 3390   2018/01/23  1    9  VA 125   129  PS  PRDA.SPFLOG1.LIST
> G20427 Tape 
> UNLOAD.ABCDE.ZZ9UYT.FB.TAPE
> SEM352 3390   2018/01/23  1    1  FB 150  1500  PS  USER.BRODCAST
> OVR346 3390   2018/01/23  3    3  FB  80  6160  PO  USER.ISPPROF
> 250 List completed successfully.
>  
> "Migrated" should be included as one of the regex pattern searches.
> Assuming space delimited, first column will be "Migrated", and the second 
> (and final) column will contain the dataset name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)