Re: VOTE: Move Apache Sqoop to attic
+1 from my end too Jarcec On Sat, May 15, 2021 at 11:26 PM Boglarka Egyed wrote: > Hi Venkat, > > Thanks for initiating the community survey and this vote thread. > > Based on the activity in the last couple of years, here is my +1 > > Regards, > Bogi > > Venkat wrote (on Sat, May 15, 2021 at > 1:42): > > > Dear Sqoop PMCs, > > > > More than a week ago, I sent an email [1] requesting suggestions for > > roadmap items and contributions from the Sqoop community. Since we > > have not been successful in eliciting roadmap or contribution feedback, > > I am proposing that we move the Apache Sqoop PMC to the Apache Attic. > > > > One of the requirements [2] in the process to move to the attic is that > > we conduct the PMC vote on the public dev list. I would like the > > PMCs to cast their votes in this thread. The voting will end on May > > 17th 2021 at 5PM PST. > > > > [+1] Move to Apache Attic > > [0] No objection/No opinion. > > [-1] Do NOT move to Apache Attic > > > > Here is my +1 > > > > Thanks > > > > Venkat > > [1] - https://s.apache.org/nvs0i > > [2] - https://attic.apache.org/process.html > > > -- This mail cannot contain viruses, because I simply don't use Windows ;-) [ http://septima.homeip.net ]
[ANNOUNCE] New Sqoop PMC member - Boglarka Egyed
On behalf of the Apache Sqoop PMC, I am excited to welcome Boglarka Egyed as a new Sqoop PMC Member. Please join me in congratulating her! Jarcec
[jira] [Commented] (SQOOP-2331) Snappy Compression Support in Sqoop-HCatalog
[ https://issues.apache.org/jira/browse/SQOOP-2331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16573363#comment-16573363 ] Jarek Jarcec Cecho commented on SQOOP-2331: --- I'm no longer very active on the project [~standon], so a more active committer will need to take a look. > Snappy Compression Support in Sqoop-HCatalog > > > Key: SQOOP-2331 > URL: https://issues.apache.org/jira/browse/SQOOP-2331 > Project: Sqoop > Issue Type: New Feature > Affects Versions: 1.4.7 > Reporter: Atul Gupta > Assignee: Shashank > Priority: Major > Fix For: 1.5.0 > > Attachments: SQOOP-2331_0.patch, SQOOP-2331_1.patch, > SQOOP-2331_2.patch, SQOOP-2331_2.patch > > > Current Apache Sqoop 1.4.7 does not compress in gzip format with the > --compress option when used with the --hcatalog-table option. It also does not > support the option --compression-codec snappy with the --hcatalog-table option. It > would be nice if we added both options in future Sqoop releases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
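As a sketch of the behavior the reporter is requesting, an invocation combining the options from the report might look like the following. The connection details and table names are hypothetical; `org.apache.hadoop.io.compress.SnappyCodec` is the standard Hadoop codec class that `--compression-codec` expects:

```sh
# Hypothetical invocation combining the options from SQOOP-2331: per the
# report, neither --compress (gzip) nor an explicit Snappy codec takes
# effect when --hcatalog-table is used.
sqoop import \
  --connect jdbc:mysql://db.example.com/sales \
  --username sqoop_user \
  --table orders \
  --hcatalog-database default \
  --hcatalog-table orders_hcat \
  --compress \
  --compression-codec org.apache.hadoop.io.compress.SnappyCodec
```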
[jira] [Commented] (SQOOP-3136) Sqoop should work well with not default file systems
[ https://issues.apache.org/jira/browse/SQOOP-3136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876757#comment-15876757 ] Jarek Jarcec Cecho commented on SQOOP-3136: --- I would recommend sending an email to {{dev@sqoop.apache.org}} asking one of the active developers to take a look at the patch [~yalovyyi]. > Sqoop should work well with not default file systems > > > Key: SQOOP-3136 > URL: https://issues.apache.org/jira/browse/SQOOP-3136 > Project: Sqoop > Issue Type: Improvement > Components: connectors/hdfs > Affects Versions: 1.4.5 > Reporter: Illya Yalovyy > Assignee: Illya Yalovyy > Attachments: SQOOP-3136.patch > > > Currently Sqoop assumes the default file system when it comes to IO operations. > This makes it hard to use other FileSystem implementations as source or > destination. Here is an example: > {code} > sqoop import --connect <connection string> --table table1 --driver <JDBC DRIVER> --username root --password <password> --delete-target-dir --target-dir > s3a://some-bucket/tmp/sqoop > ... > 17/02/15 19:16:59 ERROR tool.ImportTool: Imported Failed: Wrong FS: > s3a://some-bucket/tmp/sqoop, expected: hdfs://<namenode>:8020 > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (SQOOP-3136) Sqoop should work well with not default file systems
[ https://issues.apache.org/jira/browse/SQOOP-3136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871969#comment-15871969 ] Jarek Jarcec Cecho commented on SQOOP-3136: --- I've added you to the contributor list [~yalovyyi]; you should now be able to assign the ticket to yourself. > Sqoop should work well with not default file systems > > > Key: SQOOP-3136 > URL: https://issues.apache.org/jira/browse/SQOOP-3136 > Project: Sqoop > Issue Type: Improvement > Components: connectors/hdfs > Affects Versions: 1.4.5 > Reporter: Illya Yalovyy > Attachments: SQOOP-3136.patch > > > Currently Sqoop assumes the default file system when it comes to IO operations. > This makes it hard to use other FileSystem implementations as source or > destination. Here is an example: > {code} > sqoop import --connect <connection string> --table table1 --driver <JDBC DRIVER> --username root --password <password> --delete-target-dir --target-dir > s3a://some-bucket/tmp/sqoop > ... > 17/02/15 19:16:59 ERROR tool.ImportTool: Imported Failed: Wrong FS: > s3a://some-bucket/tmp/sqoop, expected: hdfs://<namenode>:8020 > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
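The "Wrong FS" failure quoted in SQOOP-3136 is the classic symptom of resolving a path against the default FileSystem (in Hadoop, `FileSystem.get(conf)`) instead of asking the path for its own filesystem (`path.getFileSystem(conf)`). The scheme comparison that produces the error can be illustrated with plain `java.net.URI` — a standalone sketch, not Sqoop's actual code:

```java
import java.net.URI;

public class Main {
    // Illustrative only: returns true when targetDir must be served by a
    // filesystem other than the configured default one (hdfs vs. s3a here).
    static boolean needsNonDefaultFs(String defaultFsUri, String targetDir) {
        String defaultScheme = URI.create(defaultFsUri).getScheme(); // e.g. "hdfs"
        String targetScheme = URI.create(targetDir).getScheme();     // "s3a", or null for a bare path
        return targetScheme != null && !targetScheme.equals(defaultScheme);
    }

    public static void main(String[] args) {
        // The failing case from the report: default FS is hdfs, target is s3a.
        System.out.println(needsNonDefaultFs("hdfs://namenode:8020", "s3a://some-bucket/tmp/sqoop")); // true
        // A scheme-less path resolves against the default FS, so no mismatch.
        System.out.println(needsNonDefaultFs("hdfs://namenode:8020", "/tmp/sqoop")); // false
    }
}
```

In Hadoop code the fix is typically to replace `FileSystem.get(conf)` with `path.getFileSystem(conf)` so each Path is served by the filesystem its scheme names.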
Re: [ANNOUNCE] New Sqoop PMC member - Abe Fine
Congratulations Abe, well deserved! Jarcec > On Nov 9, 2016, at 11:00 PM, Kathleen Ting wrote: > > On behalf of the Apache Sqoop PMC, I am stoked to welcome Abe Fine as > a new Sqoop PMC Member. Please join me in congratulating him. > > On top of his code contributions[1], Abe’s contributions towards > growing the Sqoop community are even more important for PMC > membership. He's proven his commitment by driving the 1.99.7 release, > mentoring new contributors, and driving consensus in the community. > > We appreciate all of Abe's hard work thus far, and look forward to his > continued contributions. > > Best, > Kate > > Links: > 1: https://s.apache.org/MUNt
[jira] [Commented] (SQOOP-2983) OraOop export has degraded performance with wide tables
[ https://issues.apache.org/jira/browse/SQOOP-2983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15609395#comment-15609395 ] Jarek Jarcec Cecho commented on SQOOP-2983: --- Let's create a follow-up JIRA for [~david.robson]'s feedback to merge the code paths, and let's get this in to resolve the actual perf issue, as that is negatively affecting our users. > OraOop export has degraded performance with wide tables > --- > > Key: SQOOP-2983 > URL: https://issues.apache.org/jira/browse/SQOOP-2983 > Project: Sqoop > Issue Type: Bug > Reporter: Attila Szabo > Assignee: Attila Szabo > Priority: Critical > Attachments: SQOOP-2983-5.patch, SQOOP-2983-6.patch, > SQOOP-2983-7.patch > > > The current version of OraOOP seems to perform very poorly from a performance POV > when --direct mode is turned on (regardless of whether the partitioned feature is turned > off). > Just as a baseline from the current trunk version: > Inserting 100.000 rows into an 800 column wide Oracle table has 400-600 kb/sec > with direct mode on my cluster, while the standard oracle driver can produce > up to 1.2-1.8 mb/sec. (depending on the number of mappers, batch size). > Inserting 1.000.000 rows into the same table goes up to 800k-1mb/sec with > OraOOP, however with the standard Oracle connector it's around 3.5mb/sec. > It seems OraOOP export needs a thorough review and some fixing. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-3018) Hadoop MapReduce job submission be done in client user UGI?
[ https://issues.apache.org/jira/browse/SQOOP-3018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576020#comment-15576020 ] Jarek Jarcec Cecho commented on SQOOP-3018: --- I believe that the right solution here is to use the YARN API that enables us to choose which queue the job should go to, similarly to what Hive did back in HIVE-8424. That should enable your use case the same way as impersonating the whole job would (which is a security concern) [~fchn602]. > Hadoop MapReduce job submission be done in client user UGI? > --- > > Key: SQOOP-3018 > URL: https://issues.apache.org/jira/browse/SQOOP-3018 > Project: Sqoop > Issue Type: New Feature > Components: connectors/hdfs > Affects Versions: 1.99.7 > Reporter: Yan Braun > Attachments: SQOOP-3018.patch > > > Hdfs Connector read and write to HDFS in client user UGI when proxyUser is > enabled. But MapReduce job submission is done using Sqoop user UGI, which > makes all jobs from different users run in Sqoop user's hadoop queue instead > of client users' own queue. > This is a follow-up JIRA after our discussions with Abraham Fine on whether > this will be on sqoop2 road map in the near future. Thanks. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-3018) Hadoop MapReduce job submission be done in client user UGI?
[ https://issues.apache.org/jira/browse/SQOOP-3018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566141#comment-15566141 ] Jarek Jarcec Cecho commented on SQOOP-3018: --- If the feature request here is only to run in a different scheduler queue, then there is an API in YARN that allows you to do that. HiveServer2 is using the same API, as they are also not impersonating jobs when Sentry/Ranger is used. > Hadoop MapReduce job submission be done in client user UGI? > --- > > Key: SQOOP-3018 > URL: https://issues.apache.org/jira/browse/SQOOP-3018 > Project: Sqoop > Issue Type: New Feature > Components: connectors/hdfs > Affects Versions: 1.99.7 > Reporter: Yan Braun > > Hdfs Connector read and write to HDFS in client user UGI when proxyUser is > enabled. But MapReduce job submission is done using Sqoop user UGI, which > makes all jobs from different users run in Sqoop user's hadoop queue instead > of client users' own queue. > This is a follow-up JIRA after our discussions with Abraham Fine on whether > this will be on sqoop2 road map in the near future. Thanks. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
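The YARN-level knob being referred to is most likely the standard per-job queue property: a client can route a submitted MapReduce job into a specific scheduler queue, without impersonating the user, by setting `mapreduce.job.queuename` on the job configuration. A minimal sketch follows — the queue name is hypothetical, and whether Sqoop 2 would expose this per link or per job is exactly the open question in this thread:

```xml
<!-- Hypothetical per-job setting: route the submitted MR job into the
     calling user's scheduler queue while the job itself still runs
     under the Sqoop server's UGI. -->
<property>
  <name>mapreduce.job.queuename</name>
  <value>users_alice</value>
</property>
```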
[jira] [Commented] (SQOOP-3018) Hadoop MapReduce job submission be done in client user UGI?
[ https://issues.apache.org/jira/browse/SQOOP-3018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15549455#comment-15549455 ] Jarek Jarcec Cecho commented on SQOOP-3018: --- If my memory serves me well, we did not want to impersonate the whole job as that would expose information that should not be exposed. E.g. a malicious user who doesn't have credentials to a given database - but has the privilege to use them in the Sqoop 2 server through a link object - could potentially attach a debugger to the impersonated process and get the credentials. Not impersonating the whole job means that there is no such attack vector. I'm however not sure if that is still applicable to the current code base or not. > Hadoop MapReduce job submission be done in client user UGI? > --- > > Key: SQOOP-3018 > URL: https://issues.apache.org/jira/browse/SQOOP-3018 > Project: Sqoop > Issue Type: New Feature > Components: connectors/hdfs > Affects Versions: 1.99.7 > Reporter: Yan Braun > > Hdfs Connector read and write to HDFS in client user UGI when proxyUser is > enabled. But MapReduce job submission is done using Sqoop user UGI, which > makes all jobs from different users run in Sqoop user's hadoop queue instead > of client users' own queue. > This is a follow-up JIRA after our discussions with Abraham Fine on whether > this will be on sqoop2 road map in the near future. Thanks. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2997) --password-file option triggers FileSystemClosed exception at end of Oozie action
[ https://issues.apache.org/jira/browse/SQOOP-2997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15431185#comment-15431185 ] Jarek Jarcec Cecho commented on SQOOP-2997: --- I've added you to the contributors list [~jordirodri] and you should be able to self-assign the JIRA now. Any PMC member should have privileges to add new contributors in our JIRA project. > --password-file option triggers FileSystemClosed exception at end of Oozie > action > - > > Key: SQOOP-2997 > URL: https://issues.apache.org/jira/browse/SQOOP-2997 > Project: Sqoop > Issue Type: Bug > Affects Versions: 1.4.6 > Environment: Java 1.8 + CDH5.5.1 > Reporter: Jordi > > Using the --password-file option triggers a FileSystemClosed exception at the > end of an Oozie action. > This error was fixed in the FilePasswordLoader class in sqoop-1.4.3, but it is > also happening in CryptoFileLoader, which extends the fixed one. > https://issues.apache.org/jira/browse/SQOOP-1226 > In this case we are using the --password-file option with an encrypted file so we > need to use CryptoFileLoader. 
> Error LOG: > Job commit failed: java.io.IOException: Filesystem closed > at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:837) > at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1720) > at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1662) > at > org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:404) > > at > org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:400) > > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > > at > org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:400) > > at > org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:343) > > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:917) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:898) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:795) > at > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.touchz(CommitterEventHandler.java:265) > > at > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:271) > > at > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:237) > > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > > at java.lang.Thread.run(Thread.java:745) > Average Map Time 57sec -- This message was sent by Atlassian JIRA (v6.3.4#6332)
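The stack trace shows the MapReduce committer touching a DistributedFileSystem instance that another component already closed — by default Hadoop caches and shares FileSystem objects, so one holder's `close()` breaks every other holder of the same instance. Besides the code fix (not closing a cached instance, as in SQOOP-1226), a common operational workaround is to disable the cache for the affected scheme; this is a generic Hadoop setting, not something the JIRA itself prescribes:

```xml
<!-- Workaround sketch: give each FileSystem.get() call its own HDFS
     client instance so a stray close() cannot affect unrelated
     components. Costs extra connections, so prefer fixing the
     offending close() where possible. -->
<property>
  <name>fs.hdfs.impl.disable.cache</name>
  <value>true</value>
</property>
```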
[jira] [Assigned] (SQOOP-2989) throw nullpointerexception
[ https://issues.apache.org/jira/browse/SQOOP-2989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho reassigned SQOOP-2989: - Assignee: happyziqi Done :) > throw nullpointerexception > -- > > Key: SQOOP-2989 > URL: https://issues.apache.org/jira/browse/SQOOP-2989 > Project: Sqoop > Issue Type: Bug > Components: tools > Reporter: happyziqi > Assignee: happyziqi > Labels: newbie > Fix For: no-release > > Attachments: nullPinter.patch > > > When the configurable parameter 'bindir' points at a common directory, > Sqoop may throw a NullPointerException if a file in that directory has been > deleted during the jar-building stage -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: [VOTE] Release Sqoop version 1.99.7rc1
+1 Thanks for building the second RC, Abe! Jarcec > On Jul 19, 2016, at 4:25 PM, Abraham Fine wrote: > > This is Sqoop 2, version 1.99.7, release candidate 1. The main purpose of > this release is to increase the stability of the generic-jdbc-connector and > the hdfs-connector. > > The only difference between rc1 and rc0 is the removal of extra files that > were accidentally placed into the rc0 tarballs. > > *** Please cast your vote by Wednesday 2016-07-27 *** > > The list of fixed issues: > https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12329023=Html=12311320=Create_token=A5KQ-2QAV-T4JA-FDED%7C2b6155c10e0f3699f07ec360b18417d73d5736b7%7Clin > > The tarball (*.tar.gz), signature (*.asc), checksum (*.md5, *.sha): > https://dist.apache.org/repos/dist/dev/sqoop/1.99.7_rc1/ > > The tag to be voted upon: > https://git-wip-us.apache.org/repos/asf?p=sqoop.git;a=tag;h=refs/tags/release-1.99.7-rc1 > > The KEYS file: > http://www.apache.org/dist/sqoop/KEYS > > Thanks, > Abraham Fine
Re: [VOTE] Release Sqoop version 1.99.7
Thank you Abe! Jarcec > On Jul 15, 2016, at 10:45 AM, Abraham Fine <a...@abrahamfine.com> wrote: > > Good catch Jarcec. > > I seem to have left some stuff in there that i failed to notice due to having > them in my gitignore. > > I will cut another release that does not include all this extra stuff and > update the release documentation to make a note about this. > > Thanks, > Abe > >> On Jul 15, 2016, at 09:58, Jarek Jarcec Cecho <jar...@apache.org> wrote: >> >> Thanks for putting up the release candidate Abe, appreciated! >> >> Quickly taking a look, I do have couple of comments: >> >> * The source tarball contains a spinx_rtd_theme.tar and ojdbc6.jar files >> that I shouldn’t be there, right? >> * This one is probably not really a concern, but tool/lib contains testng >> jar that should’t be needed right? >> >> Otherwise I’ve build the source release and run tests there and validated >> both binary and source tarballs top level files (license, notice, …) and >> those all look good. >> >> Jarcec >> >>> On Jul 11, 2016, at 10:31 AM, Abraham Fine <a...@abrahamfine.com> wrote: >>> >>> This is Sqoop 2, version 1.99.7, release candidate 0. The main purpose of >>> this release is to increasing the stability of the generic-jdbc-connector >>> and the hdfs-connector. >>> >>> *** Please cast your vote by Friday 2016-07-15 *** >>> >>> The list of fixed issues: >>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12329023=Html=12311320=Create_token=A5KQ-2QAV-T4JA-FDED%7C2b6155c10e0f3699f07ec360b18417d73d5736b7%7Clin >>> >>> The tarball (*.tar.gz), signature (*.asc), checksum (*.md5, *.sha): >>> https://dist.apache.org/repos/dist/dev/sqoop/1.99.7_rc0/ >>> >>> The tag to be voted upon: >>> https://git-wip-us.apache.org/repos/asf?p=sqoop.git;a=tag;h=refs/tags/release-1.99.7-rc0 >>> >>> The KEYS file: >>> http://www.apache.org/dist/sqoop/KEYS >>> >>> Thanks, >>> Abraham Fine >> >
Re: [VOTE] Release Sqoop version 1.99.7
Thanks for putting up the release candidate Abe, appreciated! Quickly taking a look, I do have a couple of comments: * The source tarball contains sphinx_rtd_theme.tar and ojdbc6.jar files that shouldn’t be there, right? * This one is probably not really a concern, but tool/lib contains a testng jar that shouldn’t be needed, right? Otherwise I’ve built the source release, run the tests there, and validated both the binary and source tarballs’ top-level files (license, notice, …) and those all look good. Jarcec > On Jul 11, 2016, at 10:31 AM, Abraham Fine wrote: > > This is Sqoop 2, version 1.99.7, release candidate 0. The main purpose of > this release is to increase the stability of the generic-jdbc-connector and > the hdfs-connector. > > *** Please cast your vote by Friday 2016-07-15 *** > > The list of fixed issues: > https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12329023=Html=12311320=Create_token=A5KQ-2QAV-T4JA-FDED%7C2b6155c10e0f3699f07ec360b18417d73d5736b7%7Clin > > The tarball (*.tar.gz), signature (*.asc), checksum (*.md5, *.sha): > https://dist.apache.org/repos/dist/dev/sqoop/1.99.7_rc0/ > > The tag to be voted upon: > https://git-wip-us.apache.org/repos/asf?p=sqoop.git;a=tag;h=refs/tags/release-1.99.7-rc0 > > The KEYS file: > http://www.apache.org/dist/sqoop/KEYS > > Thanks, > Abraham Fine
Re: Sqoop Import using Merge-Key Column And Sqoop Merge using Merge-Key
Hi Julio, it’s better to send such questions to the dev@sqoop mailing list as the whole community can chime in and help you out. I’ve added the list now, however if you need to follow up, you will need to sign up per our instructions at [1]. Jarcec Links: 1: http://sqoop.apache.org/mail-lists.html > On Jul 12, 2016, at 8:05 PM, Julio Bregeiro wrote: > > Hi Jarcec, how are you? > > My name is Julio, and I belong to a big data architects team here in Brazil, > at a company of French origin. > We are working hard on the implementation of hadoop, and have encountered > some difficulties in some use cases, specifically when trying to do a > "Merge" in my HDFS, but my table contains more than one column in the makeup of > the primary key. > I know that your activities must take up all your time, but if you have > some time to spare, could you tell me if there is any alternative in Sqoop > Import or Sqoop Merge where I can use more than one key column with the > "--merge-key" option? > > Thank you in advance for your support > A big hug > > Julio > Julio Bregeiro > Big Data & Business Intelligence Solutions Architect > L.D. +55 (11) 5070-1400 Celular: (11) 99744-5332 > Keyrus - Av. Jabaquara, 1909, 12 andar - 04045-003 - São Paulo - Brazil
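For reference, Sqoop 1's merge tool takes a single column via `--merge-key`; as far as I know there is no built-in support for composite keys, which is why a common workaround is to expose a synthetic single-column key (e.g. a concatenation of the key columns) in the imported data. A plain single-key invocation looks roughly like this — the paths, class name, and key column are hypothetical:

```sh
# Hypothetical sqoop merge invocation with a single-column key; --merge-key
# accepts one column, so a composite key needs a synthetic key column.
sqoop merge \
  --new-data /user/etl/orders_delta \
  --onto /user/etl/orders_base \
  --target-dir /user/etl/orders_merged \
  --jar-file orders.jar \
  --class-name orders \
  --merge-key order_id
```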
[jira] [Commented] (SQOOP-2963) Update license file
[ https://issues.apache.org/jira/browse/SQOOP-2963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368243#comment-15368243 ] Jarek Jarcec Cecho commented on SQOOP-2963: --- +1, looks good to me! > Update license file > --- > > Key: SQOOP-2963 > URL: https://issues.apache.org/jira/browse/SQOOP-2963 > Project: Sqoop > Issue Type: Sub-task >Reporter: Abraham Fine >Assignee: Abraham Fine > Fix For: no-release, 1.99.7 > > Attachments: SQOOP-2963.patch > > > Update license files for 3rd libraries -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2956) Update change log with 1.99.7 release
[ https://issues.apache.org/jira/browse/SQOOP-2956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368240#comment-15368240 ] Jarek Jarcec Cecho commented on SQOOP-2956: --- +1 looks good to me! > Update change log with 1.99.7 release > - > > Key: SQOOP-2956 > URL: https://issues.apache.org/jira/browse/SQOOP-2956 > Project: Sqoop > Issue Type: Sub-task >Reporter: Abraham Fine >Assignee: Abraham Fine > Fix For: 1.99.7 > > Attachments: SQOOP-2956.patch > > > See the wiki for instructions and script on how to generate release notes: > https://cwiki.apache.org/confluence/display/SQOOP/How+to+Release+Sqoop2#HowtoReleaseSqoop2-Updatechangelogfile -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2953) Sqoop 1.99.7 release preparation
[ https://issues.apache.org/jira/browse/SQOOP-2953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368239#comment-15368239 ] Jarek Jarcec Cecho commented on SQOOP-2953: --- For non-code-related changes that are release specific (checking the license file, updating the change log), if the release manager is a committer, then there is no need to review those patches before committing and the release manager can commit them directly. This is fine because the PMC has to check those files as part of voting on RCs, and hence the review will actually happen later in the cycle. > Sqoop 1.99.7 release preparation > > > Key: SQOOP-2953 > URL: https://issues.apache.org/jira/browse/SQOOP-2953 > Project: Sqoop > Issue Type: Bug > Reporter: Abraham Fine > Assignee: Abraham Fine > Fix For: no-release > > > Umbrella jira for the 1.99.7 release. > For reference, the release wikis are: > https://cwiki.apache.org/confluence/display/SQOOP/How+to+Release > https://cwiki.apache.org/confluence/display/SQOOP/How+to+Release+Sqoop2 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Please look into SQOOP-2930 & SQOOP-1933
Hi Rabin, the Sqoop project currently does not accept pull requests. You will need to generate a patch and upload it to the JIRA as per our instructions here: https://cwiki.apache.org/confluence/display/SQOOP/How+to+Contribute I’ve also added the dev mailing list so that the whole development community can help you out :) You will need to join the mailing list if you want to respond to any email; instructions are here: http://sqoop.apache.org/mail-lists.html Jarcec > On Jul 4, 2016, at 10:23 AM, Rabin Banerjee wrote: > > SQOOP-2930 & SQOOP-1933 > PR: > https://github.com/apache/sqoop/pull/20 > > -- > Rabin Banerjee > >
Re: Sqoop question
Hi John, the dev@sqoop.apache.org mailing list is the best place to ask such questions. I’ve added this mailing list now, but you will need to sign up to the list in order to send messages there. Please check out the instructions on our website [1] on how to do that. Would you mind describing what exactly you are doing to get the exception? Also, getting the full stack trace would be extremely helpful. Jarcec Links: 1: http://sqoop.apache.org/mail-lists.html > On Jun 20, 2016, at 1:31 PM, John wrote: > > Jarek, > > I continue to poke at the Sqoop project occasionally. I have recently > switched machines, and when I try to run some of the Sqoop tests I get > ClassNotFoundExceptions on org.apache.hadoop.yarn.YarnException. This has > happened to me in both Eclipse and IntelliJ. Do you have any suggestions on > how to resolve them? > > Thanks, > John Todd
Re: Sqoop 1.99.7
There don’t seem to be any objections, so let’s do the release. Thanks for volunteering to be the release manager, Abe! Since this will be your first release, I’m happy to be a release mentor if that is fine with you. Jarcec > On May 20, 2016, at 10:37 AM, Abraham Fine <a...@abrahamfine.com> wrote: > > I would be open to driving the release. > > Abe > >> On May 20, 2016, at 08:20, Abraham Elmahrek <abra...@elmahrek.com> wrote: >> >> +1 for a new release. >> >> On Fri, May 20, 2016 at 7:08 AM Jarek Jarcec Cecho <jar...@apache.org> >> wrote: >> >>> Any volunteers to drive that release? :) >>> >>> Jarcec >>> >>>> On May 18, 2016, at 1:49 PM, Abraham Fine <a...@abrahamfine.com> wrote: >>>> >>>> I agree. >>>> >>>> >>>>> On May 18, 2016, at 11:24, Jarek Jarcec Cecho <jar...@apache.org> >>> wrote: >>>>> >>>>> +dev@sqoop >>>>> >>>>> I see plenty of new features available in the head of sqoop2 branch, so >>> perhaps it would make sense to do a release? >>>>> >>>>> Jarcec >>>>> >>>>>> On May 10, 2016, at 12:33 AM, ssvinarc...@cybervisiontech.com wrote: >>>>>> >>>>>> Hi all, >>>>>> >>>>>> Could somebody say when will be released Sqoop 1.99.7? >>>>>> >>>>>> Thanks, >>>>>> Sergey! >>>>> >>>> >>> >>> >
Re: Ivy local resolve
Hey Attila, is there a JIRA associated with it? Jarcec > On May 18, 2016, at 12:05 PM, Attila Szabo <asz...@cloudera.com> wrote: > > Hi, > > I think in my previous mail I included an invalid review ticket link. > > The proper one is this: > https://reviews.apache.org/r/47110/diff/1#index_header > > Sorry for the confusion! > > Cheers, > M. > > On Tue, May 10, 2016 at 9:02 PM, Attila Szabo <asz...@cloudera.com> wrote: > >> Hi all, >> >> Do you have any comments on this? >> >> Thanks, >> M. >> >> On Mon, May 9, 2016 at 9:27 AM, Attila Szabo <asz...@cloudera.com> wrote: >> >>> Hey Jarcec, >>> >>> Sorry for this, but I forgot about this one... >>> >>> Please find my proposed changes here: >>> >>> https://reviews.apache.org/r/47108/diff/1#index_header >>> >>> Please review it when you have time for that! >>> >>> Cheers, >>> Maugli >>> >>> On Thu, Apr 21, 2016 at 4:30 PM, Jarek Jarcec Cecho <jar...@apache.org> >>> wrote: >>> >>>> Hi Attila, >>>> thank you for looking into how to make Sqoop 1 compilation faster. I’m >>>> also in the camp of devs who are affected by the turnaround of at least 3 >>>> minutes for one simple change :( I’m sadly not an ivy expert either, but >>>> I’m sure that we can do some staged approach to at least make an option to >>>> build faster. Could you open a JIRA and attach the patch there? Sadly our >>>> mailing lists don’t allow attachments, so the patch did not make it to the >>>> list. >>>> >>>> Jarcec >>>> >>>>> On Apr 18, 2016, at 4:26 AM, Attila Szabo <asz...@cloudera.com> wrote: >>>>> >>>>> Hi all, >>>>> >>>>> First let me introduce myself to the community: >>>>> I'm Attila Szabo. I'm a software engineer at Cloudera since Oct. 2015, >>>> and I've just recently (March this year) started contributing to Sqoop. So as >>>> you can see I'm quite new in this community, but I'm also very enthusiastic >>>> about joining the Sqoop development. >>>>> >>>>> I'd like to ask a question about Ivy resolve, and how to make it >>>> effective. 
>>>>> I have to highlight that I'm not an ant or ivy expert, so it is >>>> possible I've missed something! >>>>> >>>>> However I've faced the following issue on my dev pc: >>>>> >>>>> Every ant operation is very slow for me, because regardless of whether I have the >>>> artifacts in my local ivy cache or not, it goes to the maven2 repo to check >>>> something connected to the resolve process. It doesn't download anything, >>>> as I've already got the dependencies, however this process is still quite >>>> slow (on my home network, for example, it can take 2-5 minutes). >>>>> >>>>> I've seen that it looks for the dependencies in the local .m2 >>>> repository, but usually I do not have those artifacts in my local .m2, and >>>> it would also make sense to me to have a maven-independent solution. >>>>> >>>>> So I've read a few things about the ivy resolver, and got a workaround >>>> (details in the attached patch file), which provides another FS-based >>>> resolver pointing to my local ivy cache, and that gives the required >>>> performance for me (10 seconds max for the resolve phase, once I've downloaded >>>> all the dependencies). >>>>> >>>>> My questions are the following: >>>>> • Is it a valid solution, or did I make any fundamental mistakes? >>>>> • If the ivy cache related way is not preferred (for any reason), >>>> is there any easy-to-access-and-run solution to have the related artifacts >>>> installed/downloaded in my local maven repo? >>>>> • Is there any other way to have a fast resolution without >>>> "hacking" around the ivysettings or the local maven repo (it is possible >>>> I've just missed an ant task, or something in the docs). 
>>>>> Many thanks for the help, >>>>> >>>>> >>>>> -- >>>>> Best regards, >>>>> >>>>> Attila Szabo >>>>> Software Engineer >>>>> >>>>> >>>> >>>> >>> >>> >>> -- >>> Best regards, >>> >>> Attila Szabo >>> Software Engineer >>> >>> <http://www.cloudera.com> >>> >> >> >> >> -- >> Best regards, >> >> Attila Szabo >> Software Engineer >> >> <http://www.cloudera.com> >> > > > > -- > Best regards, > > Attila Szabo > Software Engineer > > <http://www.cloudera.com>
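The workaround Attila describes — a filesystem resolver in front of the remote ones, pointing at the local Ivy cache — would look roughly like the ivysettings fragment below. This is a reconstruction from the email's description, not the actual attached patch; the patterns shown assume Ivy's default cache layout under `~/.ivy2/cache`:

```xml
<ivysettings>
  <settings defaultResolver="local-first"/>
  <resolvers>
    <!-- Try the local ivy cache first; fall through to maven2 only on a miss. -->
    <chain name="local-first" returnFirst="true">
      <filesystem name="local-ivy-cache">
        <ivy pattern="${user.home}/.ivy2/cache/[organisation]/[module]/ivy-[revision].xml"/>
        <artifact pattern="${user.home}/.ivy2/cache/[organisation]/[module]/[type]s/[artifact]-[revision].[ext]"/>
      </filesystem>
      <ibiblio name="maven2" m2compatible="true"/>
    </chain>
  </resolvers>
</ivysettings>
```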
[jira] [Updated] (SQOOP-2923) Sqoop2: Reword documentation to make it clear that the api endpoints fall under /sqoop
[ https://issues.apache.org/jira/browse/SQOOP-2923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho updated SQOOP-2923: -- Fix Version/s: 1.99.7 > Sqoop2: Reword documentation to make it clear that the api endpoints fall > under /sqoop > -- > > Key: SQOOP-2923 > URL: https://issues.apache.org/jira/browse/SQOOP-2923 > Project: Sqoop > Issue Type: Bug >Affects Versions: 1.99.6 >Reporter: Abraham Fine >Assignee: Abraham Fine > Fix For: 1.99.7 > > Attachments: SQOOP-2923.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Sqoop branch 1.5.x
I like the proposal and I would second it. > 1. Deprecate support for Hadoop 1 and older versions of HBase (only > support 1.0+) and Hive (only support 1.0+) I would even suggest being more extreme: rather than “deprecating”, I would directly remove that support. Jarcec > On May 18, 2016, at 11:31 AM, Venkat Ranganathan wrote: > > Proposal for Sqoop 1.5 > > We have Sqoop 1.4.x going on, which is the production version of Sqoop, with > support for ancient versions of Hadoop (from 0.20), Hive 0.7+ and HBase 0.94, > among others. > > There is a good amount of interest in contributing to Sqoop 1 as it is the > current production version. But Sqoop has a few issues: Hadoop 1.x support > is causing problems in bringing new features easily into Sqoop 1.x (for > example, getting the Phoenix changes into Sqoop, and potentially others waiting in > the wings). > > Also, we have been using an Ant/Ivy based project setup, which is causing issues with > component version management. We could potentially use a Maven profile based > configuration to easily allow multiple component versions and have more > flexibility in builds, packaging, and how we publish artifacts. > > To that end, here is what I propose (I had a brief discussion with Jarcec last > week), in order of priority: > > Create a new Sqoop 1.5 branch where we > > 1. Deprecate support for Hadoop 1 and older versions of HBase (only > support 1.0+) and Hive (only support 1.0+) > > 2. Mavenize the project > > 3. Clean up the package jumble in the code – only have org.apache.sqoop > packages > > 4. Bring in all the new features that otherwise are difficult to bring in > with older versions > > What should we do with the 1.4.x branch? My initial thought is that we do a > 1.4.7 release with what is available and have 1.5.x as the branch to make > further changes. > > Thoughts? > > Thanks > > Venkat
Re: Sqoop 1.99.7
+dev@sqoop I see plenty of new features available in the head of the sqoop2 branch, so perhaps it would make sense to do a release? Jarcec > On May 10, 2016, at 12:33 AM, ssvinarc...@cybervisiontech.com wrote: > > Hi all, > > Could somebody say when Sqoop 1.99.7 will be released? > > Thanks, > Sergey!
[jira] [Commented] (SQOOP-2903) Add Kudu connector for Sqoop
[ https://issues.apache.org/jira/browse/SQOOP-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289499#comment-15289499 ] Jarek Jarcec Cecho commented on SQOOP-2903: --- Can you upload the patch to [review board|https://reviews.apache.org]? It's quite sizable so it would be useful to have it there. > Add Kudu connector for Sqoop > > > Key: SQOOP-2903 > URL: https://issues.apache.org/jira/browse/SQOOP-2903 > Project: Sqoop > Issue Type: Improvement > Components: connectors >Reporter: Sameer Abhyankar >Assignee: Sameer Abhyankar > Attachments: SQOOP-2903.1.patch, SQOOP-2903.patch > > > Sqoop currently does not have a connector for Kudu. We should add the > functionality to allow Sqoop to ingest data directly into Kudu. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2906) Optimization of AvroUtil.toAvroIdentifier
[ https://issues.apache.org/jira/browse/SQOOP-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15258962#comment-15258962 ] Jarek Jarcec Cecho commented on SQOOP-2906: --- Could you upload the patch to the [review board|https://reviews.apache.org/dashboard/] [~joeri.hermans]? I do have some comments that will be better shared directly in the code :) > Optimization of AvroUtil.toAvroIdentifier > - > > Key: SQOOP-2906 > URL: https://issues.apache.org/jira/browse/SQOOP-2906 > Project: Sqoop > Issue Type: Improvement >Reporter: Joeri Hermans >Assignee: Joeri Hermans > Labels: avro, hadoop, optimization > Attachments: diff.txt > > > Hi all > Our distributed profiler indicated some inefficiencies in the > AvroUtil.toAvroIdentifier method, more specifically, the use of Regex > patterns. This can be directly observed from the FlameGraph generated by this > profiler (https://jhermans.web.cern.ch/jhermans/sqoop_avro_flamegraph.svg). > We implemented an optimization, and compared this with the original method. > On our testing machine, the optimization by itself is about 500% (on average) > more efficient compared to the original implementation. We have yet to test > how this optimization will influence the performance of user jobs. > Any suggestions or remarks are welcome. > Kind regards, > Joeri > https://github.com/apache/sqoop/pull/18 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
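For readers following the thread: the attached diff is not reproduced here, but the general shape of the optimization being discussed — replacing a per-call regex with a single pass over the characters — can be sketched as below. This is a hypothetical illustration, not the committed SQOOP-2906 patch; the method names and exact sanitization rules are assumptions.

```java
// Hypothetical sketch: regex-based identifier sanitizer vs. a single-pass
// character loop. Both map any character outside [A-Za-z0-9_] to '_' and
// prefix '_' when the result would start with a digit.
public class AvroIdentifierSketch {
    // Regex-style baseline: runs pattern matching on every call.
    public static String toAvroIdentifierRegex(String candidate) {
        String cleaned = candidate.replaceAll("[^A-Za-z0-9_]", "_");
        return Character.isDigit(cleaned.charAt(0)) ? "_" + cleaned : cleaned;
    }

    // Single-pass variant: one scan over the chars, no regex machinery.
    public static String toAvroIdentifier(String candidate) {
        char[] data = candidate.toCharArray();
        for (int i = 0; i < data.length; i++) {
            char c = data[i];
            boolean ok = c == '_' || (c >= 'a' && c <= 'z')
                    || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9');
            if (!ok) {
                data[i] = '_';
            }
        }
        String cleaned = new String(data);
        return (data[0] >= '0' && data[0] <= '9') ? "_" + cleaned : cleaned;
    }

    public static void main(String[] args) {
        System.out.println(toAvroIdentifier("COL$1"));   // COL_1
        System.out.println(toAvroIdentifier("1column")); // _1column
    }
}
```

Avoiding `String.replaceAll` matters here because it recompiles the pattern on every invocation; in a per-record code path that overhead dominates, which is consistent with the FlameGraph observation in the issue.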
[jira] [Resolved] (SQOOP-2915) Fixing Oracle related unit tests
[ https://issues.apache.org/jira/browse/SQOOP-2915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho resolved SQOOP-2915. --- Resolution: Fixed Fix Version/s: 1.4.7 Thank you for your contribution [~maugli]! > Fixing Oracle related unit tests > > > Key: SQOOP-2915 > URL: https://issues.apache.org/jira/browse/SQOOP-2915 > Project: Sqoop > Issue Type: Bug >Reporter: Attila Szabo >Assignee: Attila Szabo > Fix For: 1.4.7 > > Attachments: SQOOP-2915.patch > > > Quite the same as SQOOP-2909 (has the same root cause). > src/test/com/cloudera/sqoop/manager/OracleExportTest.java > src/test/com/cloudera/sqoop/manager/OracleUtils.java > src/test/org/apache/sqoop/manager/oracle/ExportTest.java > src/test/org/apache/sqoop/manager/oracle/OracleCallExportTest.java > src/test/org/apache/sqoop/manager/oracle/TimestampDataTest.java > still failing -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2496) Sqoop2: Provide a way to inject external connectors
[ https://issues.apache.org/jira/browse/SQOOP-2496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252047#comment-15252047 ] Jarek Jarcec Cecho commented on SQOOP-2496: --- Please use {{git diff}} to create a diff with your changes [~ge.bugman]. We have a quite old, but hopefully still relevant [How to contribute guide|https://cwiki.apache.org/confluence/display/SQOOP/How+to+Contribute], so you might want to take a look :) > Sqoop2: Provide a way to inject external connectors > --- > > Key: SQOOP-2496 > URL: https://issues.apache.org/jira/browse/SQOOP-2496 > Project: Sqoop > Issue Type: Bug >Affects Versions: 1.99.6 > Reporter: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: MapreduceSubmissionEngine.java > > > On an internal hackathon we were hacking on a Sqoop 2 connector with [~singhashish] > and we went through a few troubles that we should address. > We have a [configuration > property|https://github.com/apache/sqoop/blob/sqoop2/dist/src/main/server/conf/sqoop.properties#L173] > for an extra directory from which we will load jar files. We were able to use > this configuration property to load our hacked connector into the Server, but we > were not able to get it working through job submission. Here is the exception > that we hit: > {code} > java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative > path in absolute URI: > jar:file://var/lib/sqoop/connectors/Connector-1.0-SNAPSHOT.jar!
> at org.apache.hadoop.fs.Path.initialize(Path.java:206) > at org.apache.hadoop.fs.Path.(Path.java:172) > at > org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:215) > at > org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:390) > at > org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:483) > at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1306) > at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1303) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671) > at org.apache.hadoop.mapreduce.Job.submit(Job.java:1303) > at > org.apache.sqoop.submission.mapreduce.MapreduceSubmissionEngine.submitToCluster(MapreduceSubmissionEngine.java:274) > at > org.apache.sqoop.submission.mapreduce.MapreduceSubmissionEngine.submit(MapreduceSubmissionEngine.java:255) > at org.apache.sqoop.driver.JobManager.start(JobManager.java:288) > at > org.apache.sqoop.handler.JobRequestHandler.startJob(JobRequestHandler.java:380) > at > org.apache.sqoop.handler.JobRequestHandler.handleEvent(JobRequestHandler.java:116) > at > org.apache.sqoop.server.v1.JobServlet.handlePutRequest(JobServlet.java:96) > at > org.apache.sqoop.server.SqoopProtocolServlet.doPut(SqoopProtocolServlet.java:79) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:646) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:723) > {code} > To put into nutshell, > [ClassUtils.jarForClass|https://github.com/apache/sqoop/blob/sqoop2/common/src/main/java/org/apache/sqoop/utils/ClassUtils.java#L136] > returns for external connectors path starting with prefix {{jar:file}} and > suffix {{!}} that breaks mapreduce code. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
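To make the "in a nutshell" point above concrete: `org.apache.hadoop.fs.Path` cannot parse the `jar:file:...!` URL form that `ClassUtils.jarForClass` returns, so the wrapper needs to be stripped to a plain file path before handing it to mapreduce. The sketch below is illustration only — the method name and exact normalization are assumptions, not Sqoop code.

```java
// Hypothetical sketch: normalize a "jar:file:...!..." URL (as returned by
// Class.getResource-style lookups) into a plain filesystem path that
// org.apache.hadoop.fs.Path could consume.
public class JarUrlSketch {
    public static String toPlainFilePath(String jarUrl) {
        String s = jarUrl;
        if (s.startsWith("jar:")) {
            s = s.substring("jar:".length());   // drop the jar: wrapper
        }
        int bang = s.indexOf('!');
        if (bang >= 0) {
            s = s.substring(0, bang);           // drop the "!/entry" suffix
        }
        if (s.startsWith("file:")) {
            s = s.substring("file:".length());  // drop the file: scheme
        }
        return s;
    }

    public static void main(String[] args) {
        System.out.println(toPlainFilePath(
            "jar:file:/var/lib/sqoop/connectors/Connector-1.0-SNAPSHOT.jar!/"));
        // /var/lib/sqoop/connectors/Connector-1.0-SNAPSHOT.jar
    }
}
```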
[jira] [Commented] (SQOOP-2496) Sqoop2: Provide a way to inject external connectors
[ https://issues.apache.org/jira/browse/SQOOP-2496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252016#comment-15252016 ] Jarek Jarcec Cecho commented on SQOOP-2496: --- Can you please create a textual patch and attach it to the JIRA [~ge.bugman]? > Sqoop2: Provide a way to inject external connectors > --- > > Key: SQOOP-2496 > URL: https://issues.apache.org/jira/browse/SQOOP-2496 > Project: Sqoop > Issue Type: Bug >Affects Versions: 1.99.6 > Reporter: Jarek Jarcec Cecho > Fix For: 1.99.7 > > > On an internal hackathon we were hacking on a Sqoop 2 connector with [~singhashish] > and we went through a few troubles that we should address. > We have a [configuration > property|https://github.com/apache/sqoop/blob/sqoop2/dist/src/main/server/conf/sqoop.properties#L173] > for an extra directory from which we will load jar files. We were able to use > this configuration property to load our hacked connector into the Server, but we > were not able to get it working through job submission. Here is the exception > that we hit: > {code} > java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative > path in absolute URI: > jar:file://var/lib/sqoop/connectors/Connector-1.0-SNAPSHOT.jar!
> at org.apache.hadoop.fs.Path.initialize(Path.java:206) > at org.apache.hadoop.fs.Path.(Path.java:172) > at > org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:215) > at > org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:390) > at > org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:483) > at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1306) > at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1303) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671) > at org.apache.hadoop.mapreduce.Job.submit(Job.java:1303) > at > org.apache.sqoop.submission.mapreduce.MapreduceSubmissionEngine.submitToCluster(MapreduceSubmissionEngine.java:274) > at > org.apache.sqoop.submission.mapreduce.MapreduceSubmissionEngine.submit(MapreduceSubmissionEngine.java:255) > at org.apache.sqoop.driver.JobManager.start(JobManager.java:288) > at > org.apache.sqoop.handler.JobRequestHandler.startJob(JobRequestHandler.java:380) > at > org.apache.sqoop.handler.JobRequestHandler.handleEvent(JobRequestHandler.java:116) > at > org.apache.sqoop.server.v1.JobServlet.handlePutRequest(JobServlet.java:96) > at > org.apache.sqoop.server.SqoopProtocolServlet.doPut(SqoopProtocolServlet.java:79) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:646) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:723) > {code} > To put into nutshell, > [ClassUtils.jarForClass|https://github.com/apache/sqoop/blob/sqoop2/common/src/main/java/org/apache/sqoop/utils/ClassUtils.java#L136] > returns for external connectors path starting with prefix {{jar:file}} and > suffix {{!}} that breaks mapreduce code. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Ivy local resolve
Hi Attila, thank you for looking into how to make Sqoop 1 compilation faster. I’m also in the camp of devs who are affected by the turnaround of at least 3 minutes for one simple change :( I’m sadly not an ivy expert either, but I’m sure that we can do some staged approach to at least make a faster build an option. Could you open a JIRA and attach the patch there? Sadly our mailing list doesn’t allow attachments, so the patch did not make it to the list. Jarcec > On Apr 18, 2016, at 4:26 AM, Attila Szabo wrote: > > Hi all, > > First let me introduce myself to the community: > I'm Attila Szabo. I'm a software engineer at Cloudera since Oct. 2015, and > I've just recently (March this year) started to contribute to Sqoop. So as you > can see I'm quite new in this community, but also I'm very enthusiastic to > join the Sqoop development. > > I'd like to ask a question about Ivy resolve, and how to make it effective. > I have to highlight that I'm not an ant or ivy expert, so it is > possible I've missed something! > > However, I've faced the following issue on my dev pc: > > Every ant operation is very slow for me, because regardless of whether I have the > artifacts in my local ivy cache or not, it goes to the maven2 repo to check > something connected to the resolve process. It doesn't download anything, as > I've already got the dependencies, however this process is still quite slow > (on my home network, for example, it could take 2-5 minutes). > > I've seen that it looks for the dependencies in the local .m2 repository, but > usually I do not have those artifacts in my local .m2, and it > would also make sense for me to have a maven-independent solution. > > So I've read a few things about the ivy resolver and got a workaround (details > in the attached patch file), which provides another FS-related resolver > pointing to my local ivy cache, and that gives the required performance for me > (the resolve phase takes 10 seconds max, once I've downloaded all the dependencies).
> > My questions are the following: > • Is it a valid solution, or did I make any fundamental mistakes? > • If the ivy cache related way is not preferred (for any reason), is > there any easy-to-access-and-run solution to have the related artifacts > installed/downloaded in my local maven repo? > • Is there any other way to have a fast resolution without "hacking" > around the ivysettings or the local maven repo (it is possible I've just > missed an ant task, or something in the docs)? > Many thanks for the help, > > > -- > Best regards, > > Attila Szabo > Software Engineer > >
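Since the attached patch did not survive the mailing list, here is a sketch of the kind of ivysettings override Attila describes — a filesystem resolver pointing at the local ivy cache, consulted before the remote maven2 repo. This is an assumption about the workaround's shape, not the actual patch; resolver names and the cache pattern are illustrative and would need adjusting to the local cache layout.

```xml
<!-- Hypothetical ivysettings fragment: try the local ivy cache first,
     fall back to the maven2 repo only when an artifact is missing. -->
<ivysettings>
  <settings defaultResolver="local-first"/>
  <resolvers>
    <filesystem name="local-ivy-cache">
      <ivy pattern="${user.home}/.ivy2/cache/[organisation]/[module]/ivy-[revision].xml"/>
      <artifact pattern="${user.home}/.ivy2/cache/[organisation]/[module]/[type]s/[artifact]-[revision].[ext]"/>
    </filesystem>
    <chain name="local-first" returnFirst="true">
      <resolver ref="local-ivy-cache"/>
      <ibiblio name="maven2" m2compatible="true"/>
    </chain>
  </resolvers>
</ivysettings>
```

With `returnFirst="true"` the chain stops at the first resolver that satisfies a dependency, which is what avoids the slow remote check for already-cached artifacts.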
[jira] [Assigned] (SQOOP-2906) Optimization of AvroUtil.toAvroIdentifier
[ https://issues.apache.org/jira/browse/SQOOP-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho reassigned SQOOP-2906: - Assignee: Joeri Hermans > Optimization of AvroUtil.toAvroIdentifier > - > > Key: SQOOP-2906 > URL: https://issues.apache.org/jira/browse/SQOOP-2906 > Project: Sqoop > Issue Type: Improvement >Reporter: Joeri Hermans >Assignee: Joeri Hermans > Labels: avro, hadoop, optimization > > Hi all > Our distributed profiler indicated some inefficiencies in the > AvroUtil.toAvroIdentifier method, more specifically, the use of Regex > patterns. This can be directly observed from the FlameGraph generated by this > profiler (https://jhermans.web.cern.ch/jhermans/sqoop_avro_flamegraph.svg). > We implemented an optimization, and compared this with the original method. > On our testing machine, the optimization by itself is about 500% (on average) > more efficient compared to the original implementation. We have yet to test > how this optimization will influence the performance of user jobs. > Any suggestions or remarks are welcome. > Kind regards, > Joeri > https://github.com/apache/sqoop/pull/18 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2906) Optimization of AvroUtil.toAvroIdentifier
[ https://issues.apache.org/jira/browse/SQOOP-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251891#comment-15251891 ] Jarek Jarcec Cecho commented on SQOOP-2906: --- Hi [~joeri.hermans], thank you very much for picking up this improvement. Sadly the Sqoop project currently does not accept GitHub pull requests. Could you please create a text patch (diff between current HEAD and your latest changes) and attach it to the JIRA? Please also upload the patch to [review board|https://reviews.apache.org/dashboard/]. > Optimization of AvroUtil.toAvroIdentifier > - > > Key: SQOOP-2906 > URL: https://issues.apache.org/jira/browse/SQOOP-2906 > Project: Sqoop > Issue Type: Improvement >Reporter: Joeri Hermans > Labels: avro, hadoop, optimization > > Hi all > Our distributed profiler indicated some inefficiencies in the > AvroUtil.toAvroIdentifier method, more specifically, the use of Regex > patterns. This can be directly observed from the FlameGraph generated by this > profiler (https://jhermans.web.cern.ch/jhermans/sqoop_avro_flamegraph.svg). > We implemented an optimization, and compared this with the original method. > On our testing machine, the optimization by itself is about 500% (on average) > more efficient compared to the original implementation. We have yet to test > how this optimization will influence the performance of user jobs. > Any suggestions or remarks are welcome. > Kind regards, > Joeri > https://github.com/apache/sqoop/pull/18 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (SQOOP-2909) Oracle related ImportTest fails after SQOOP-2737
[ https://issues.apache.org/jira/browse/SQOOP-2909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho reassigned SQOOP-2909: - Assignee: Attila Szabo (was: Jarek Jarcec Cecho) > Oracle related ImportTest fails after SQOOP-2737 > > > Key: SQOOP-2909 > URL: https://issues.apache.org/jira/browse/SQOOP-2909 > Project: Sqoop > Issue Type: Bug >Reporter: Attila Szabo >Assignee: Attila Szabo > Fix For: 1.4.7 > > Attachments: SQOOP-2909.patch > > > After SQOOP-2737 had been implemented, Oracle ImportTest started to fail. > SQOOP-2737 aimed to support special characters (like whitespaces) in the names > of the Oracle tables and columns. While the implementation works perfectly, > the related test cases had not been changed, and since after escaping+quoting > the underlying names become case sensitive, the whole ImportTest test suite > started to fail -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2908) Sqoop2: Increase maximal length for jdbc connection strings
[ https://issues.apache.org/jira/browse/SQOOP-2908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15241648#comment-15241648 ] Jarek Jarcec Cecho commented on SQOOP-2908: --- It's a simple annotation change, so I'm +1 even without precommit hook. > Sqoop2: Increase maximal length for jdbc connection strings > --- > > Key: SQOOP-2908 > URL: https://issues.apache.org/jira/browse/SQOOP-2908 > Project: Sqoop > Issue Type: Bug >Affects Versions: 1.99.6 >Reporter: Abraham Fine >Assignee: Abraham Fine > Fix For: 1.99.7 > > Attachments: SQOOP-2908.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2496) Sqoop2: Provide a way to inject external connectors
[ https://issues.apache.org/jira/browse/SQOOP-2496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15235240#comment-15235240 ] Jarek Jarcec Cecho commented on SQOOP-2496: --- Hi [~ge.bugman], this bug is still open, so I'm afraid that it wasn't fixed yet. There is a small chance that the issue might have been fixed as a side effect of refactoring the way we're loading connector classes in SQOOP-2574, but I haven't tried it myself yet. Nevertheless SQOOP-2574 is only in the {{sqoop2}} branch, so you would have to compile the branch yourself (it hasn't been released yet). I'm wondering, what custom connector are you using? > Sqoop2: Provide a way to inject external connectors > --- > > Key: SQOOP-2496 > URL: https://issues.apache.org/jira/browse/SQOOP-2496 > Project: Sqoop > Issue Type: Bug >Affects Versions: 1.99.6 > Reporter: Jarek Jarcec Cecho > Fix For: 1.99.7 > > > On an internal hackathon we were hacking on a Sqoop 2 connector with [~singhashish] > and we went through a few troubles that we should address. > We have a [configuration > property|https://github.com/apache/sqoop/blob/sqoop2/dist/src/main/server/conf/sqoop.properties#L173] > for an extra directory from which we will load jar files. We were able to use > this configuration property to load our hacked connector into the Server, but we > were not able to get it working through job submission. Here is the exception > that we hit: > {code} > java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative > path in absolute URI: > jar:file://var/lib/sqoop/connectors/Connector-1.0-SNAPSHOT.jar!
> at org.apache.hadoop.fs.Path.initialize(Path.java:206) > at org.apache.hadoop.fs.Path.(Path.java:172) > at > org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:215) > at > org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:390) > at > org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:483) > at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1306) > at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1303) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671) > at org.apache.hadoop.mapreduce.Job.submit(Job.java:1303) > at > org.apache.sqoop.submission.mapreduce.MapreduceSubmissionEngine.submitToCluster(MapreduceSubmissionEngine.java:274) > at > org.apache.sqoop.submission.mapreduce.MapreduceSubmissionEngine.submit(MapreduceSubmissionEngine.java:255) > at org.apache.sqoop.driver.JobManager.start(JobManager.java:288) > at > org.apache.sqoop.handler.JobRequestHandler.startJob(JobRequestHandler.java:380) > at > org.apache.sqoop.handler.JobRequestHandler.handleEvent(JobRequestHandler.java:116) > at > org.apache.sqoop.server.v1.JobServlet.handlePutRequest(JobServlet.java:96) > at > org.apache.sqoop.server.SqoopProtocolServlet.doPut(SqoopProtocolServlet.java:79) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:646) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:723) > {code} > To put into nutshell, > [ClassUtils.jarForClass|https://github.com/apache/sqoop/blob/sqoop2/common/src/main/java/org/apache/sqoop/utils/ClassUtils.java#L136] > returns for external connectors path starting with prefix {{jar:file}} and > suffix {{!}} that breaks mapreduce code. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (SQOOP-2903) Add Kudu connector for Sqoop
[ https://issues.apache.org/jira/browse/SQOOP-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho reassigned SQOOP-2903: - Assignee: Sameer Abhyankar Assigning to you [~sabhyankar] :) > Add Kudu connector for Sqoop > > > Key: SQOOP-2903 > URL: https://issues.apache.org/jira/browse/SQOOP-2903 > Project: Sqoop > Issue Type: Improvement > Components: connectors >Reporter: Sameer Abhyankar >Assignee: Sameer Abhyankar > > Sqoop currently does not have a connector for Kudu. We should add the > functionality to allow Sqoop to ingest data directly into Kudu. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2561) Special Character removal from Column name as avro data results in duplicate column and fails the import
[ https://issues.apache.org/jira/browse/SQOOP-2561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214386#comment-15214386 ] Jarek Jarcec Cecho commented on SQOOP-2561: --- Left one comment on the review board [~vishnusn]. > Special Character removal from Column name as avro data results in duplicate > column and fails the import > > > Key: SQOOP-2561 > URL: https://issues.apache.org/jira/browse/SQOOP-2561 > Project: Sqoop > Issue Type: Bug >Affects Versions: 1.4.6 > Environment: cdh5.3.2 >Reporter: Suresh >Assignee: VISHNU S NAIR > Labels: AVRO, SQOOP > Fix For: 1.4.7 > > Attachments: 0001-SQOOP-2561.patch > > > When a special character like '$' or '#' is present in a column name, > sqoop/avro removes those special characters. In some cases it leads to > duplicate columns. > e.g. If we have COL$1 and COL1$ in the schema, it removes both of them and > creates the duplicate column COL1, and this results in failure of the SQOOP > import job as avro data. The same table can be loaded without the > --as-avrodatafile flag. > A similar issue was raised in > https://issues.apache.org/jira/browse/SQOOP-1361 - which I suppose is fixed, > and the fix is creating this new issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (SQOOP-2881) Sqoop2: EnrichOraOop Connector resource file
[ https://issues.apache.org/jira/browse/SQOOP-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho updated SQOOP-2881: -- Attachment: SQOOP-2881.patch > Sqoop2: EnrichOraOop Connector resource file > > > Key: SQOOP-2881 > URL: https://issues.apache.org/jira/browse/SQOOP-2881 > Project: Sqoop > Issue Type: Sub-task > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2881.patch, SQOOP-2881.patch > > > Please see parent JIRA for details. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2857) Sqoop2: toParquetTest fails frequently on the pre-commit hook
[ https://issues.apache.org/jira/browse/SQOOP-2857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204717#comment-15204717 ] Jarek Jarcec Cecho commented on SQOOP-2857: --- The hook is a bit unstable now, so I'll go ahead and commit this one nevertheless. > Sqoop2: toParquetTest fails frequently on the pre-commit hook > - > > Key: SQOOP-2857 > URL: https://issues.apache.org/jira/browse/SQOOP-2857 > Project: Sqoop > Issue Type: Bug >Affects Versions: 1.99.6 >Reporter: Abraham Fine >Assignee: Abraham Fine > Attachments: SQOOP-2857.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2857) Sqoop2: toParquetTest fails frequently on the pre-commit hook
[ https://issues.apache.org/jira/browse/SQOOP-2857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204365#comment-15204365 ] Jarek Jarcec Cecho commented on SQOOP-2857: --- I'll restart the hook once more. > Sqoop2: toParquetTest fails frequently on the pre-commit hook > - > > Key: SQOOP-2857 > URL: https://issues.apache.org/jira/browse/SQOOP-2857 > Project: Sqoop > Issue Type: Bug >Affects Versions: 1.99.6 >Reporter: Abraham Fine >Assignee: Abraham Fine > Attachments: SQOOP-2857.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2888) Test case ShowCommandTest is failing after SQOOP-2848
[ https://issues.apache.org/jira/browse/SQOOP-2888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204381#comment-15204381 ] Jarek Jarcec Cecho commented on SQOOP-2888: --- The logs have been rolled away, so I'll restart the hook. I feel that this change should not affect Kafka, but I can't verify that without logs. > Test case ShowCommandTest is failing after SQOOP-2848 > - > > Key: SQOOP-2888 > URL: https://issues.apache.org/jira/browse/SQOOP-2888 > Project: Sqoop > Issue Type: Bug > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2888.patch > > > The problem is that our client calls {{/v1/job?cname=generic-jdbc-connector}} > when the change in SQOOP-2848 made this invalid and only > {{/v1/job/all?cname=generic-jdbc-connector}} is allowed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2887) Sqoop2: Encrypt sensitive information in the repository
[ https://issues.apache.org/jira/browse/SQOOP-2887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204289#comment-15204289 ] Jarek Jarcec Cecho commented on SQOOP-2887: --- +1 on the feature and design proposal. I believe that this is a must have for users who will be running Sqoop 2 in environments where security is important. > Sqoop2: Encrypt sensitive information in the repository > --- > > Key: SQOOP-2887 > URL: https://issues.apache.org/jira/browse/SQOOP-2887 > Project: Sqoop > Issue Type: New Feature >Affects Versions: 1.99.6 >Reporter: Abraham Fine >Assignee: Abraham Fine > Attachments: SQOOP-2887.patch, > Sqoop2RepositorySensitiveDataEncryption-Upstream.pdf > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (SQOOP-2888) Test case ShowCommandTest is failing after SQOOP-2848
Jarek Jarcec Cecho created SQOOP-2888: - Summary: Test case ShowCommandTest is failing after SQOOP-2848 Key: SQOOP-2888 URL: https://issues.apache.org/jira/browse/SQOOP-2888 Project: Sqoop Issue Type: Bug Reporter: Jarek Jarcec Cecho Assignee: Jarek Jarcec Cecho Fix For: 1.99.7 Attachments: SQOOP-2888.patch The problem is that our client calls {{/v1/job?cname=generic-jdbc-connector}} when the change in SQOOP-2848 made this invalid and only{{/v1/job/all?cname=generic-jdbc-connector}} is allowed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (SQOOP-2888) Test case ShowCommandTest is failing after SQOOP-2848
[ https://issues.apache.org/jira/browse/SQOOP-2888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho updated SQOOP-2888: -- Attachment: SQOOP-2888.patch > Test case ShowCommandTest is failing after SQOOP-2848 > - > > Key: SQOOP-2888 > URL: https://issues.apache.org/jira/browse/SQOOP-2888 > Project: Sqoop > Issue Type: Bug > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2888.patch > > > The problem is that our client calls {{/v1/job?cname=generic-jdbc-connector}} > when the change in SQOOP-2848 made this invalid and only > {{/v1/job/all?cname=generic-jdbc-connector}} is allowed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (SQOOP-2888) Test case ShowCommandTest is failing after SQOOP-2848
[ https://issues.apache.org/jira/browse/SQOOP-2888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho updated SQOOP-2888: -- Description: The problem is that our client calls {{/v1/job?cname=generic-jdbc-connector}} when the change in SQOOP-2848 made this invalid and only {{/v1/job/all?cname=generic-jdbc-connector}} is allowed. (was: The problem is that our client calls {{/v1/job?cname=generic-jdbc-connector}} when the change in SQOOP-2848 made this invalid and only{{/v1/job/all?cname=generic-jdbc-connector}} is allowed.) > Test case ShowCommandTest is failing after SQOOP-2848 > - > > Key: SQOOP-2888 > URL: https://issues.apache.org/jira/browse/SQOOP-2888 > Project: Sqoop > Issue Type: Bug > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2888.patch > > > The problem is that our client calls {{/v1/job?cname=generic-jdbc-connector}} > when the change in SQOOP-2848 made this invalid and only > {{/v1/job/all?cname=generic-jdbc-connector}} is allowed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2857) Sqoop2: toParquetTest fails frequently on the pre-commit hook
[ https://issues.apache.org/jira/browse/SQOOP-2857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201963#comment-15201963 ] Jarek Jarcec Cecho commented on SQOOP-2857: --- I've kicked off precommit build for this one again. > Sqoop2: toParquetTest fails frequently on the pre-commit hook > - > > Key: SQOOP-2857 > URL: https://issues.apache.org/jira/browse/SQOOP-2857 > Project: Sqoop > Issue Type: Bug >Affects Versions: 1.99.6 >Reporter: Abraham Fine >Assignee: Abraham Fine > Attachments: SQOOP-2857.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2666) Write precision and scale information into Avro schema
[ https://issues.apache.org/jira/browse/SQOOP-2666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191223#comment-15191223 ] Jarek Jarcec Cecho commented on SQOOP-2666: --- Thanks for the ping [~proftodd], sorry that I've missed your previous message. Now that SQOOP-1493 has been committed, for decimal columns we will automatically create {{scale}} and {{precision}} fields as required by the [Avro schema|http://avro.apache.org/docs/1.8.0/spec.html#Decimal] definition. Hence I'm wondering what the value is of also appending {{sqlScale}} and {{sqlPrecision}} fields? Shouldn't users just be able to use {{scale}}/{{precision}} instead? > Write precision and scale information into Avro schema > -- > > Key: SQOOP-2666 > URL: https://issues.apache.org/jira/browse/SQOOP-2666 > Project: Sqoop > Issue Type: Improvement > Components: codegen, test, tools >Affects Versions: 1.4.7 >Reporter: John Todd >Priority: Minor > Labels: newbie, patch, patch-available, test > Fix For: 1.4.6 > > Attachments: SQOOP-2666.patch > > > If present, write SQL precision and scale values from DECIMAL and NUMERIC > column types into generated Avro schema. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
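For context on the {{scale}}/{{precision}} fields referenced above: the Avro specification's decimal logical type annotates a bytes (or fixed) type with exactly those two attributes. A SQL column declared NUMERIC(10,2) would map to a record field shaped like the following (the field name is illustrative, not from the patch under discussion):

```json
{
  "name": "price",
  "type": {
    "type": "bytes",
    "logicalType": "decimal",
    "precision": 10,
    "scale": 2
  }
}
```

Since the spec-level attributes already carry the SQL precision and scale, readers can recover them from the schema itself, which is the point of Jarcec's question about duplicating them as extra fields.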
[jira] [Reopened] (SQOOP-2666) Write precision and scale information into Avro schema
[ https://issues.apache.org/jira/browse/SQOOP-2666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho reopened SQOOP-2666: --- Since we're still discussing this patch, I'll reopen the JIRA. > Write precision and scale information into Avro schema > -- > > Key: SQOOP-2666 > URL: https://issues.apache.org/jira/browse/SQOOP-2666 > Project: Sqoop > Issue Type: Improvement > Components: codegen, test, tools >Affects Versions: 1.4.7 >Reporter: John Todd >Priority: Minor > Labels: newbie, patch, patch-available, test > Fix For: 1.4.6 > > Attachments: SQOOP-2666.patch > > > If present, write SQL precision and scale values from DECIMAL and NUMERIC > column types into generated Avro schema. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2561) Special Character removal from Column name as avro data results in duplicate column and fails the import
[ https://issues.apache.org/jira/browse/SQOOP-2561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189399#comment-15189399 ] Jarek Jarcec Cecho commented on SQOOP-2561: --- I'm not concerned about fields that start with an underscore [~vishnusn]. But I do believe that if a table had two columns - {{first~column}} and {{first_column}} - then we would have duplicates. Would you agree? > Special Character removal from Column name as avro data results in duplicate > column and fails the import > > > Key: SQOOP-2561 > URL: https://issues.apache.org/jira/browse/SQOOP-2561 > Project: Sqoop > Issue Type: Bug >Affects Versions: 1.4.6 > Environment: cdh5.3.2 >Reporter: Suresh >Assignee: VISHNU S NAIR > Labels: AVRO, SQOOP > Fix For: 1.4.7 > > Attachments: 0001-SQOOP-2561.patch > > > When special characters like '$' or '#' are present in a column name, > sqoop/avro removes those special characters. In some cases this leads to > duplicate columns. > e.g. If we have COL$1 and COL1$ in the schema, it removes both of them and > creates the duplicate column COL1, and it results in failure of the Sqoop > import job as Avro data. The same table can be loaded without the > --as-avrodatafile flag. > A similar issue was raised in > https://issues.apache.org/jira/browse/SQOOP-1361 - which I suppose is fixed, > and the fix is creating this new issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
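The collision described above is easy to reproduce with a toy sanitizer. This is a sketch of the failure mode only; Sqoop's real name mangling lives in ClassWriter and the Avro code paths:

```python
import re

def to_avro_name(column):
    """Toy sanitizer: replace characters that are illegal in Avro names
    with '_', as a stand-in for Sqoop's actual name mangling."""
    return re.sub(r"[^A-Za-z0-9_]", "_", column)

# Two distinct SQL columns collapse onto the same Avro field name:
assert to_avro_name("first~column") == "first_column"
assert to_avro_name("first_column") == "first_column"

def strip_chars(column):
    """Variant that *removes* special characters instead of replacing
    them - the behavior the bug report describes for COL$1 / COL1$."""
    return re.sub(r"[^A-Za-z0-9_]", "", column)

# Both columns become COL1, so the generated schema has duplicates:
assert strip_chars("COL$1") == strip_chars("COL1$") == "COL1"
```

Either strategy (replace or remove) can merge distinct column names, which is why the follow-up discussion suggests a separate JIRA rather than a quick fix.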
[jira] [Updated] (SQOOP-2826) Sqoop2: Doc: Auto-generate connector pages
[ https://issues.apache.org/jira/browse/SQOOP-2826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho updated SQOOP-2826: -- Attachment: SQOOP-2826.patch Uploading a new version of the patch - still not ready to be reviewed, as it's a WIP that depends on half a dozen patches currently under review. > Sqoop2: Doc: Auto-generate connector pages > -- > > Key: SQOOP-2826 > URL: https://issues.apache.org/jira/browse/SQOOP-2826 > Project: Sqoop > Issue Type: Sub-task > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2826.patch, SQOOP-2826.patch > > > Our current pages describing connectors are heavily outdated. This happened > because we've added a bunch of new configuration properties in various patches, > but forgot to update the docs. While we could force every > connector-changing patch to also update the docs, I think that keeping the same > information in two places is tedious. A much better option would be to > automatically generate the connector documentation pages from our code. > To be very specific, I can see at least two areas where auto-generated > content would really help: > * [Generating input list for > connectors|http://sqoop.apache.org/docs/1.99.6/Connectors.html] > * [Generating command line > parameters|http://sqoop.apache.org/docs/1.99.6/CommandLineClient.html] > I'm sure that there will be others :) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2495) Sqoop2: Provide simple test that can validate if connector is reasonably formed
[ https://issues.apache.org/jira/browse/SQOOP-2495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188124#comment-15188124 ] Jarek Jarcec Cecho commented on SQOOP-2495: --- The attached patch indeed depends on all the JIRAs that are in the "Depends upon" section, hence committing it before them doesn't make much sense. > Sqoop2: Provide simple test that can validate if connector is reasonably > formed > --- > > Key: SQOOP-2495 > URL: https://issues.apache.org/jira/browse/SQOOP-2495 > Project: Sqoop > Issue Type: Bug >Affects Versions: 1.99.6 >Reporter: Jarek Jarcec Cecho >Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2495.patch > > > On an internal hackathon we were hacking a Sqoop 2 connector with [~singhashish] > and we ran into a few troubles that we should address. > We have a lot of requirements for Sqoop connectors that are only documented > but not enforced by the code, for example: > * Resource bundles need to have names for all properties in configuration > objects > * Configuration objects need to be properly annotated > If either of those is incorrect then we happily load the connector just to > throw some random exceptions during runtime. We should provide a simple test > case that all connectors can reuse to validate that a connector is properly > formed as Sqoop expects. > (Which still doesn't mean that the connector will work, as we can't guarantee > that the extractor/loader is properly implemented. But we can at least help > people to not see random exceptions such as those described in SQOOP-2494). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (SQOOP-2495) Sqoop2: Provide simple test that can validate if connector is reasonably formed
[ https://issues.apache.org/jira/browse/SQOOP-2495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho updated SQOOP-2495: -- Attachment: SQOOP-2495.patch > Sqoop2: Provide simple test that can validate if connector is reasonably > formed > --- > > Key: SQOOP-2495 > URL: https://issues.apache.org/jira/browse/SQOOP-2495 > Project: Sqoop > Issue Type: Bug >Affects Versions: 1.99.6 > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2495.patch > > > On internal hackathon we we're hacking Sqoop 2 connector with [~singhashish] > and we went through few troubles that we should address. > We have a lot of requirements for Sqoop connectors that are only documented > but not enforced by the code, for example: > * Resource bundles need to have names for all properties in configuration > objects > * Configuration objects needs to be properly annotated > If either of those is incorrect then we happily load the connector just to > throw some random exceptions during runtime. We should provide simple test > case that all connectors can reuse to validate that connector is properly > formed as Sqoop expects. > (Which still doesn't mean that the connector will work as we can't guarantee > that extractor/loader is properly implemented. But we can at least help > people to not see random exceptions such as those described in SQOOP-2494). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (SQOOP-2495) Sqoop2: Provide simple test that can validate if connector is reasonably formed
[ https://issues.apache.org/jira/browse/SQOOP-2495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho reassigned SQOOP-2495: - Assignee: Jarek Jarcec Cecho I'll take this one up. I've recently started changing the resource bundles and I found that we have no easily reusable test that I can run for all the different connectors, which is a real bummer. Rather than doing some sort of short-term solution, I'll provide a patch for this one. > Sqoop2: Provide simple test that can validate if connector is reasonably > formed > --- > > Key: SQOOP-2495 > URL: https://issues.apache.org/jira/browse/SQOOP-2495 > Project: Sqoop > Issue Type: Bug >Affects Versions: 1.99.6 > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > > On an internal hackathon we were hacking a Sqoop 2 connector with [~singhashish] > and we ran into a few troubles that we should address. > We have a lot of requirements for Sqoop connectors that are only documented > but not enforced by the code, for example: > * Resource bundles need to have names for all properties in configuration > objects > * Configuration objects need to be properly annotated > If either of those is incorrect then we happily load the connector just to > throw some random exceptions during runtime. We should provide a simple test > case that all connectors can reuse to validate that a connector is properly > formed as Sqoop expects. > (Which still doesn't mean that the connector will work, as we can't guarantee > that the extractor/loader is properly implemented. But we can at least help > people to not see random exceptions such as those described in SQOOP-2494). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (SQOOP-2548) Sqoop2: RESTiliency: Enforce strict connector names
[ https://issues.apache.org/jira/browse/SQOOP-2548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho resolved SQOOP-2548. --- Resolution: Won't Fix As we've migrated from IDs to names only, this is no longer a relevant concern. > Sqoop2: RESTiliency: Enforce strict connector names > --- > > Key: SQOOP-2548 > URL: https://issues.apache.org/jira/browse/SQOOP-2548 > Project: Sqoop > Issue Type: Sub-task > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > > We're using the connector name in the URL as-is and we're reusing the same URL > to also retrieve connectors by ID. While our example connectors are written > in a way that doesn't cause any trouble, I think that we should add code > enforcing that connectors are named in a way that won't cause any > problems down the road. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
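Although the issue was closed as Won't Fix, the kind of enforcement it proposed can be sketched as a simple name check. The validity rule below - restricting names to URL-safe identifiers - is a hypothetical illustration, not anything codified in Sqoop:

```python
import re

# Hypothetical rule: a connector name must be usable verbatim as a REST
# URL path segment, so restrict it to a conservative identifier set.
# This is an illustrative sketch, not Sqoop's actual validation logic.
VALID_NAME = re.compile(r"^[A-Za-z][A-Za-z0-9_-]*$")

def is_valid_connector_name(name):
    return bool(VALID_NAME.match(name))

assert is_valid_connector_name("generic-jdbc-connector")
assert not is_valid_connector_name("bad/name?x=1")   # breaks URL routing
assert not is_valid_connector_name("42connector")    # looks like a numeric ID
```

Rejecting all-digit prefixes matters for the ambiguity the issue mentions: the same URL once served lookups both by name and by numeric ID.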
[jira] [Created] (SQOOP-2883) Sqoop2: Update model classes to represent new constant for connector resource bundles
Jarek Jarcec Cecho created SQOOP-2883: - Summary: Sqoop2: Update model classes to represent new constant for connector resource bundles Key: SQOOP-2883 URL: https://issues.apache.org/jira/browse/SQOOP-2883 Project: Sqoop Issue Type: Sub-task Reporter: Jarek Jarcec Cecho Assignee: Jarek Jarcec Cecho Fix For: 1.99.7 Attachments: SQOOP-2883.patch As part of the parent umbrella JIRA, I've made a few changes to the resource bundles that connectors are providing: # Added a new example key that we should add to [{{MNamedElement}}|https://github.com/apache/sqoop/blob/sqoop2/common/src/main/java/org/apache/sqoop/model/MNamedElement.java] # Added a new key {{connector.name}} with a human-readable connector name. The constant should be referenced somewhere in the code. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (SQOOP-2883) Sqoop2: Update model classes to represent new constant for connector resource bundles
[ https://issues.apache.org/jira/browse/SQOOP-2883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho updated SQOOP-2883: -- Attachment: SQOOP-2883.patch > Sqoop2: Update model classes to represent new constant for connector resource > bundles > - > > Key: SQOOP-2883 > URL: https://issues.apache.org/jira/browse/SQOOP-2883 > Project: Sqoop > Issue Type: Sub-task > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2883.patch > > > As part of the parent umbrella JIRA, I've made few changes to the resource > bundles that connectors are providing: > # Added a new key for example that we should add to > [{{MNamedElement}}|https://github.com/apache/sqoop/blob/sqoop2/common/src/main/java/org/apache/sqoop/model/MNamedElement.java] > # Added a new key {{connector.name}} with human readable connector name. The > constant should be referred somewhere in the code. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (SQOOP-2882) Sqoop2: Enrich SFTP Connector resource file
[ https://issues.apache.org/jira/browse/SQOOP-2882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho updated SQOOP-2882: -- Attachment: SQOOP-2882.patch > Sqoop2: Enrich SFTP Connector resource file > --- > > Key: SQOOP-2882 > URL: https://issues.apache.org/jira/browse/SQOOP-2882 > Project: Sqoop > Issue Type: Sub-task > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2882.patch > > > Please see parent JIRA for details. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (SQOOP-2882) Sqoop2: Enrich SFTP Connector resource file
Jarek Jarcec Cecho created SQOOP-2882: - Summary: Sqoop2: Enrich SFTP Connector resource file Key: SQOOP-2882 URL: https://issues.apache.org/jira/browse/SQOOP-2882 Project: Sqoop Issue Type: Sub-task Reporter: Jarek Jarcec Cecho Assignee: Jarek Jarcec Cecho Fix For: 1.99.7 Attachments: SQOOP-2882.patch Please see parent JIRA for details. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (SQOOP-2561) Special Character removal from Column name as avro data results in duplicate column and fails the import
[ https://issues.apache.org/jira/browse/SQOOP-2561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15187546#comment-15187546 ] Jarek Jarcec Cecho edited comment on SQOOP-2561 at 3/9/16 6:08 PM: --- Thanks for the reminder [~vishnusn]. I think that this solution wont't solve all the cases. For example if the table have columns {{first~column}} and {{first_column}}, then we again create duplicates. I've was looking into the [ClassWriter|https://github.com/apache/sqoop/blob/trunk/src/java/org/apache/sqoop/orm/ClassWriter.java] and it seems to me that it will hit the same problem, so I guess that we can leave it be for now and create follow up JIRA to solve that problem. What do you think [~vishnusn]? was (Author: jarcec): Thanks for the reminder [~vishnusn]. I think that this solution wont't solve all the cases. For example if the table have columns {{first~column]] and {{first_column}}, then we again create duplicates. I've was looking into the [ClassWriter|https://github.com/apache/sqoop/blob/trunk/src/java/org/apache/sqoop/orm/ClassWriter.java] and it seems to me that it will hit the same problem, so I guess that we can leave it be for now and create follow up JIRA to solve that problem. What do you think [~vishnusn]? > Special Character removal from Column name as avro data results in duplicate > column and fails the import > > > Key: SQOOP-2561 > URL: https://issues.apache.org/jira/browse/SQOOP-2561 > Project: Sqoop > Issue Type: Bug >Affects Versions: 1.4.6 > Environment: cdh5.3.2 >Reporter: Suresh >Assignee: VISHNU S NAIR > Labels: AVRO, SQOOP > Fix For: 1.4.7 > > Attachments: 0001-SQOOP-2561.patch > > > When a Special character like '$' or '#' are present in column name, > sqoop/avro removes those special character. In some cases it leads to > duplicate column. > e.g. 
If we have COL$1 and COL1$ in the schema, it removes both of them and > creates the duplicate column as COL1 and it results in failure of the SQOOP > import job as a avro data. The same table can be loaded without > --as-avarodata flag. > The similar issue is raised in, > https://issues.apache.org/jira/browse/SQOOP-1361 - which i suppose is fixed > and the fix is creating this new issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2561) Special Character removal from Column name as avro data results in duplicate column and fails the import
[ https://issues.apache.org/jira/browse/SQOOP-2561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15187546#comment-15187546 ] Jarek Jarcec Cecho commented on SQOOP-2561: --- Thanks for the reminder [~vishnusn]. I think that this solution won't solve all the cases. For example, if the table has columns {{first~column}} and {{first_column}}, then we again create duplicates. I was looking into the [ClassWriter|https://github.com/apache/sqoop/blob/trunk/src/java/org/apache/sqoop/orm/ClassWriter.java] and it seems to me that it will hit the same problem, so I guess we can leave it be for now and create a follow-up JIRA to solve that problem. What do you think [~vishnusn]? > Special Character removal from Column name as avro data results in duplicate > column and fails the import > > > Key: SQOOP-2561 > URL: https://issues.apache.org/jira/browse/SQOOP-2561 > Project: Sqoop > Issue Type: Bug >Affects Versions: 1.4.6 > Environment: cdh5.3.2 >Reporter: Suresh >Assignee: VISHNU S NAIR > Labels: AVRO, SQOOP > Fix For: 1.4.7 > > Attachments: 0001-SQOOP-2561.patch > > > When special characters like '$' or '#' are present in a column name, > sqoop/avro removes those special characters. In some cases this leads to > duplicate columns. > e.g. If we have COL$1 and COL1$ in the schema, it removes both of them and > creates the duplicate column COL1, and it results in failure of the Sqoop > import job as Avro data. The same table can be loaded without the > --as-avrodatafile flag. > A similar issue was raised in > https://issues.apache.org/jira/browse/SQOOP-1361 - which I suppose is fixed, > and the fix is creating this new issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (SQOOP-2881) Sqoop2: Enrich OraOop Connector resource file
Jarek Jarcec Cecho created SQOOP-2881: - Summary: Sqoop2: Enrich OraOop Connector resource file Key: SQOOP-2881 URL: https://issues.apache.org/jira/browse/SQOOP-2881 Project: Sqoop Issue Type: Sub-task Reporter: Jarek Jarcec Cecho Assignee: Jarek Jarcec Cecho Fix For: 1.99.7 Attachments: SQOOP-2881.patch Please see parent JIRA for details. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (SQOOP-2881) Sqoop2: Enrich OraOop Connector resource file
[ https://issues.apache.org/jira/browse/SQOOP-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho updated SQOOP-2881: -- Attachment: SQOOP-2881.patch > Sqoop2: Enrich OraOop Connector resource file > > > Key: SQOOP-2881 > URL: https://issues.apache.org/jira/browse/SQOOP-2881 > Project: Sqoop > Issue Type: Sub-task > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2881.patch > > > Please see parent JIRA for details. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (SQOOP-2880) Provide argument for overriding temporary directory
[ https://issues.apache.org/jira/browse/SQOOP-2880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho reassigned SQOOP-2880: - Assignee: Attila Szabo (was: Jarek Jarcec Cecho) Sure, be my guest [~maugli]! > Provide argument for overriding temporary directory > --- > > Key: SQOOP-2880 > URL: https://issues.apache.org/jira/browse/SQOOP-2880 > Project: Sqoop > Issue Type: Bug >Affects Versions: 1.4.6 > Reporter: Jarek Jarcec Cecho >Assignee: Attila Szabo > > Several types of imports (incremental, hive, ...) might require import into > temporary directory first. We're currently putting the temporary directory > into > [{{_sqoop}}|https://github.com/apache/sqoop/blob/trunk/src/java/org/apache/sqoop/util/AppendUtils.java#L42] > and we're allowing user to override it with {{sqoop.test.import.rootDir}}. > However this property is not documented anywhere and I found people confused > about it as it contains the string {{test}} which is really confusing. The > most concerning thing is that as the properties are not persisted in > metastore, this option won't work for stored jobs at all. Hence I would like > to propose a first class argument to handle the temporary directory (e.g. > something like {{--temporary-directory}}). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (SQOOP-2880) Provide argument for overriding temporary directory
Jarek Jarcec Cecho created SQOOP-2880: - Summary: Provide argument for overriding temporary directory Key: SQOOP-2880 URL: https://issues.apache.org/jira/browse/SQOOP-2880 Project: Sqoop Issue Type: Bug Affects Versions: 1.4.6 Reporter: Jarek Jarcec Cecho Assignee: Jarek Jarcec Cecho Several types of imports (incremental, hive, ...) might require importing into a temporary directory first. We're currently putting the temporary directory into [{{_sqoop}}|https://github.com/apache/sqoop/blob/trunk/src/java/org/apache/sqoop/util/AppendUtils.java#L42] and we're allowing users to override it with {{sqoop.test.import.rootDir}}. However, this property is not documented anywhere, and I have found people confused by it, as it contains the string {{test}}. The most concerning thing is that, because the properties are not persisted in the metastore, this option won't work for stored jobs at all. Hence I would like to propose a first-class argument to handle the temporary directory (e.g. something like {{--temporary-directory}}). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2844) Sqoop2: TrustStore support for shell
[ https://issues.apache.org/jira/browse/SQOOP-2844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15185742#comment-15185742 ] Jarek Jarcec Cecho commented on SQOOP-2844: --- It's a shell-specific security configuration, which is in general hard to test, so I'm fine with the missing test right now. > Sqoop2: TrustStore support for shell > > > Key: SQOOP-2844 > URL: https://issues.apache.org/jira/browse/SQOOP-2844 > Project: Sqoop > Issue Type: Sub-task >Affects Versions: 1.99.6 >Reporter: Abraham Fine >Assignee: Abraham Fine > Attachments: SQOOP-2844.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2844) Sqoop2: TrustStore support for shell
[ https://issues.apache.org/jira/browse/SQOOP-2844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15185120#comment-15185120 ] Jarek Jarcec Cecho commented on SQOOP-2844: --- I've restarted the hook to see if some of the test failures are flaky. > Sqoop2: TrustStore support for shell > > > Key: SQOOP-2844 > URL: https://issues.apache.org/jira/browse/SQOOP-2844 > Project: Sqoop > Issue Type: Sub-task >Affects Versions: 1.99.6 >Reporter: Abraham Fine >Assignee: Abraham Fine > Attachments: SQOOP-2844.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (SQOOP-2877) Sqoop2: Enrich Kite Connector resource file
[ https://issues.apache.org/jira/browse/SQOOP-2877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho updated SQOOP-2877: -- Attachment: SQOOP-2877.patch > Sqoop2: Enrich Kite Connector resource file > --- > > Key: SQOOP-2877 > URL: https://issues.apache.org/jira/browse/SQOOP-2877 > Project: Sqoop > Issue Type: Sub-task > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2877.patch > > > Please see parent JIRA for details. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (SQOOP-2877) Sqoop2: Enrich Kite Connector resource file
Jarek Jarcec Cecho created SQOOP-2877: - Summary: Sqoop2: Enrich Kite Connector resource file Key: SQOOP-2877 URL: https://issues.apache.org/jira/browse/SQOOP-2877 Project: Sqoop Issue Type: Sub-task Reporter: Jarek Jarcec Cecho Assignee: Jarek Jarcec Cecho Fix For: 1.99.7 Please see parent JIRA for details. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (SQOOP-2834) Sqoop2: Integration: Limit debug log to only classes that we're interested in
[ https://issues.apache.org/jira/browse/SQOOP-2834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho resolved SQOOP-2834. --- Resolution: Fixed Resolved as part of SQOOP-2832. > Sqoop2: Integration: Limit debug log to only classes that we're interested in > - > > Key: SQOOP-2834 > URL: https://issues.apache.org/jira/browse/SQOOP-2834 > Project: Sqoop > Issue Type: Bug > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > > To follow up on my recent patches that are helping with "debuggability" of > pre-commit hook and our integration tests in general, I've looked into what > exactly are we logging that size of our logs is more then 1GB per execution. > Here is what I've done: > # I've applied my patch from SQOOP-2832 to get log for one test only > # I've run a magic that gives me classes that are responsible for logging: > {code} > cat > test/target/surefire-reports/0_org.apache.sqoop.integration.connector.hdfs.AppendModeTest.test.txt > | sed -re "s/^.*\] ([A-Z]+)[ ]+([A-Za-z.]+) .*$/\1 \2/" | sort | uniq -c | > sort -r > report > {code} > With a top results being: > {code} > 6927 DEBUG org.apache.sqoop.repository.JdbcRepositoryTransaction > 5783 DEBUG org.apache.hadoop.ipc.Client > 5752 DEBUG org.apache.sqoop.repository.common.CommonRepositoryHandler > 5750 DEBUG org.apache.hadoop.hdfs.DFSClient > 4784 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode > 4715 DEBUG org.eclipse.jetty.io.SelectorManager > 4660 DEBUG org.eclipse.jetty.server.HttpConnection > 4306 DEBUG org.apache.hadoop.security.UserGroupInformation > 3489 DEBUG org.eclipse.jetty.io.WriteFlusher > 2927 DEBUG org.eclipse.jetty.io.ChannelEndPoint > 2846 DEBUG org.apache.hadoop.conf.Configuration > 2830 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine > 2357 DEBUG org.eclipse.jetty.io.AbstractConnection > 2350 DEBUG org.eclipse.jetty.io.SelectChannelEndPoint > 2343 DEBUG org.eclipse.jetty.server.HttpChannel > 2332 DEBUG 
org.eclipse.jetty.servlet.ServletHandler > 2309 INFO org.apache.sqoop.repository.JdbcRepositoryTransaction > 16701 DEBUG org.apache.hadoop.security.SaslInputStream > 14613 DEBUG org.eclipse.jetty.http.HttpParser > 1426 > 1175 DEBUG > org.apache.sqoop.security.authorization.DefaultAuthorizationValidator > 1168 DEBUG org.eclipse.jetty.server.handler.ContextHandler > 1168 DEBUG org.eclipse.jetty.server.Server > 1168 DEBUG org.eclipse.jetty.server.HttpChannelState > 1034 DEBUG org.apache.hadoop.yarn.server.security.ApplicationACLsManager > 10329 DEBUG org.apache.hadoop.ipc.Server > {code} > Based on that I would like to reconfigure certain classes to limit their > logging to levels higher than {{DEBUG}} - jetty seems like a no-brainer and > Hadoop ipc might be another good candidate. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
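The sed/uniq pipeline in the issue above can be mirrored in a few lines of Python, which may be handy where that toolchain is unavailable. The log-line shape assumed here follows the sed expression's pattern (`] LEVEL fully.qualified.ClassName ...`) and the sample lines are illustrative only:

```python
import re
from collections import Counter

# Toy log in the shape the sed expression assumes; the exact surefire
# output format is an assumption for illustration.
log = """\
[2016-03-08 10:00:01] DEBUG org.eclipse.jetty.http.HttpParser parsed header
[2016-03-08 10:00:01] DEBUG org.eclipse.jetty.http.HttpParser parsed body
[2016-03-08 10:00:02] INFO org.apache.sqoop.repository.JdbcRepositoryTransaction begin
"""

# Same capture groups as the sed expression: (LEVEL, class).
pattern = re.compile(r"\] ([A-Z]+) +([A-Za-z.]+) ")
matches = (pattern.search(line) for line in log.splitlines())
counts = Counter(m.groups() for m in matches if m)

# Chattiest (level, class) pairs first - the same report the
# `sed | sort | uniq -c | sort -r` pipeline produces.
for (level, cls), n in counts.most_common():
    print(n, level, cls)
```
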
[jira] [Commented] (SQOOP-2875) Sqoop2: Integration tests should not include a request body with a delete request
[ https://issues.apache.org/jira/browse/SQOOP-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183973#comment-15183973 ] Jarek Jarcec Cecho commented on SQOOP-2875: --- There was a small race condition - the [precommit hook started before the patch was attached|https://builds.apache.org/job/PreCommit-SQOOP-Build/2298/console], which failed. This is a test-only change and I've run the affected test locally and it passed, hence +1. > Sqoop2: Integration tests should not include a request body with a delete > request > > > Key: SQOOP-2875 > URL: https://issues.apache.org/jira/browse/SQOOP-2875 > Project: Sqoop > Issue Type: Bug >Affects Versions: 1.99.6 >Reporter: Abraham Fine >Assignee: Abraham Fine > Attachments: SQOOP-2857.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (SQOOP-2870) Sqoop2: RESTiliency: Add tests for DriverHandler
[ https://issues.apache.org/jira/browse/SQOOP-2870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho updated SQOOP-2870: -- Attachment: SQOOP-2870.patch > Sqoop2: RESTiliency: Add tests for DriverHandler > > > Key: SQOOP-2870 > URL: https://issues.apache.org/jira/browse/SQOOP-2870 > Project: Sqoop > Issue Type: Sub-task > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2870.patch, SQOOP-2870.patch > > > I would like to add a small test case covering the {{DriverHandler}} module, > similar to what we're building for other REST endpoints. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (SQOOP-2855) Sqoop2: Enrich Generic JDBC Connector resource file
[ https://issues.apache.org/jira/browse/SQOOP-2855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho updated SQOOP-2855: -- Attachment: SQOOP-2855.patch > Sqoop2: Enrich Generic JDBC Connector resource file > --- > > Key: SQOOP-2855 > URL: https://issues.apache.org/jira/browse/SQOOP-2855 > Project: Sqoop > Issue Type: Sub-task > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2855.patch, SQOOP-2855.patch > > > See parent JIRA for details of what is being done here. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: SQOOP-2873 - Support for multiple columns in incremental import.
Hi Sudeep, I’m excited to see you taking on SQOOP-2873. Please do not hesitate to submit a patch; we’ll be happy to review it! Jarcec > On Mar 7, 2016, at 2:59 AM, Sudeep Jaiswal wrote: > > Hi, > > We have implemented the support for multiple columns in incremental import. > I will be happy to know your comments on the same. > > Thanks, > Sudeep
[jira] [Commented] (SQOOP-1916) Sqoop2: Yarn child leaking in integration tests
[ https://issues.apache.org/jira/browse/SQOOP-1916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183314#comment-15183314 ] Jarek Jarcec Cecho commented on SQOOP-1916: --- It's interesting that you see the leaking children on real cluster as well [~skuehn]. We've observed them only in the integration tests that are running on mini clusters. Have you by any chance tried the latest trunk as well? The trunk has significantly moved from last released version, so I'm wondering if you can still observe the behavior. > Sqoop2: Yarn child leaking in integration tests > --- > > Key: SQOOP-1916 > URL: https://issues.apache.org/jira/browse/SQOOP-1916 > Project: Sqoop > Issue Type: Bug > Components: sqoop2-test >Reporter: Abraham Elmahrek >Assignee: Scott Kuehn > Fix For: 2.0.0 > > Attachments: SQOOP-1916.patch > > > It seems like the yarn child processes are leaking. > {noformat:title=jstack} > jstack 57054 > 2014-12-16 21:39:06 > Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode): > "Attach Listener" daemon prio=5 tid=0x7f90ebd57000 nid=0x610f > waiting on condition [0x] >java.lang.Thread.State: RUNNABLE > "DestroyJavaVM" prio=5 tid=0x7f90ebd42800 nid=0x1903 waiting on > condition [0x] >java.lang.Thread.State: RUNNABLE > "Abandoned connection cleanup thread" daemon prio=5 > tid=0x7f90ebce8800 nid=0x7103 in Object.wait() > [0x0001129c9000] >java.lang.Thread.State: TIMED_WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > - waiting on <0x0007fd3b9668> (a java.lang.ref.ReferenceQueue$Lock) > at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135) > - locked <0x0007fd3b9668> (a java.lang.ref.ReferenceQueue$Lock) > at > com.mysql.jdbc.AbandonedConnectionCleanupThread.run(AbandonedConnectionCleanupThread.java:41) > "OutputFormatLoader-consumer" prio=5 tid=0x7f90eb9b5800 nid=0x6d03 > waiting on condition [0x0001127c3000] >java.lang.Thread.State: WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - 
parking to wait for <0x0007fc9e5708> (a > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) > at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) > at > java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > "org.apache.hadoop.hdfs.PeerCache@2582a699" daemon prio=5 > tid=0x7f90eca2 nid=0x6b03 waiting on condition > [0x0001126c] >java.lang.Thread.State: TIMED_WAITING (sleeping) > at java.lang.Thread.sleep(Native Method) > at org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:244) > at org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:41) > at org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:119) > at java.lang.Thread.run(Thread.java:745) > "Service Thread" daemon prio=5 tid=0x7f90ec81f800 nid=0x5303 > runnable [0x] >java.lang.Thread.State: RUNNABLE > "C2 CompilerThread1" daemon prio=5 tid=0x7f90ef001000 nid=0x5103 > waiting on condition [0x] >java.lang.Thread.State: RUNNABLE > "C2 CompilerThread0" daemon prio=5 tid=0x7f90ed819000 nid=0x4f03 > waiting on condition [0x] >java.lang.Thread.State: RUNNABLE > "Signal Dispatcher" daemon prio=5 tid=0x7f90ed818000 nid=0x4d03 > runnable [0x] >java.lang.Thread.State: RUNNABLE > "Finalizer" daemon prio=5 tid=0x7f90ec821800 nid=0x3903 in > Object.wait() [0x000110ba2000] >java.lang.Thread.State: WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > - waiting on <0x0007ff842558> (a java.lang.ref.ReferenceQueue$Lock) > at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135) > - locked <0x0007ff842558> (a 
java.lang.ref.ReferenceQueue$Lock) > at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151) > at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209) > "Reference Ha
[jira] [Comment Edited] (SQOOP-1916) Sqoop2: Yarn child leaking in integration tests
[ https://issues.apache.org/jira/browse/SQOOP-1916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183310#comment-15183310 ] Jarek Jarcec Cecho edited comment on SQOOP-1916 at 3/7/16 5:29 PM: --- Assigning to you [~skuehn]. was (Author: jarcec): Assigning to you [~ skuehn]. > Sqoop2: Yarn child leaking in integration tests > --- > > Key: SQOOP-1916 > URL: https://issues.apache.org/jira/browse/SQOOP-1916 > Project: Sqoop > Issue Type: Bug > Components: sqoop2-test >Reporter: Abraham Elmahrek >Assignee: Scott Kuehn > Fix For: 2.0.0 > > Attachments: SQOOP-1916.patch > > > It seems like the yarn child processes are leaking. > {noformat:title=jstack} > jstack 57054 > 2014-12-16 21:39:06 > Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode): > "Attach Listener" daemon prio=5 tid=0x7f90ebd57000 nid=0x610f > waiting on condition [0x] >java.lang.Thread.State: RUNNABLE > "DestroyJavaVM" prio=5 tid=0x7f90ebd42800 nid=0x1903 waiting on > condition [0x] >java.lang.Thread.State: RUNNABLE > "Abandoned connection cleanup thread" daemon prio=5 > tid=0x7f90ebce8800 nid=0x7103 in Object.wait() > [0x0001129c9000] >java.lang.Thread.State: TIMED_WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > - waiting on <0x0007fd3b9668> (a java.lang.ref.ReferenceQueue$Lock) > at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135) > - locked <0x0007fd3b9668> (a java.lang.ref.ReferenceQueue$Lock) > at > com.mysql.jdbc.AbandonedConnectionCleanupThread.run(AbandonedConnectionCleanupThread.java:41) > "OutputFormatLoader-consumer" prio=5 tid=0x7f90eb9b5800 nid=0x6d03 > waiting on condition [0x0001127c3000] >java.lang.Thread.State: WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0x0007fc9e5708> (a > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) > at > 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) > at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) > at > java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > "org.apache.hadoop.hdfs.PeerCache@2582a699" daemon prio=5 > tid=0x7f90eca2 nid=0x6b03 waiting on condition > [0x0001126c] >java.lang.Thread.State: TIMED_WAITING (sleeping) > at java.lang.Thread.sleep(Native Method) > at org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:244) > at org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:41) > at org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:119) > at java.lang.Thread.run(Thread.java:745) > "Service Thread" daemon prio=5 tid=0x7f90ec81f800 nid=0x5303 > runnable [0x] >java.lang.Thread.State: RUNNABLE > "C2 CompilerThread1" daemon prio=5 tid=0x7f90ef001000 nid=0x5103 > waiting on condition [0x] >java.lang.Thread.State: RUNNABLE > "C2 CompilerThread0" daemon prio=5 tid=0x7f90ed819000 nid=0x4f03 > waiting on condition [0x] >java.lang.Thread.State: RUNNABLE > "Signal Dispatcher" daemon prio=5 tid=0x7f90ed818000 nid=0x4d03 > runnable [0x] >java.lang.Thread.State: RUNNABLE > "Finalizer" daemon prio=5 tid=0x7f90ec821800 nid=0x3903 in > Object.wait() [0x000110ba2000] >java.lang.Thread.State: WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > - waiting on <0x0007ff842558> (a java.lang.ref.ReferenceQueue$Lock) > at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135) > - locked <0x0007ff842558> (a java.lang.ref.ReferenceQueue$Lock) > at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151) > at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209) > "Reference Handler" daemon prio=5 
tid=0x7f90eb811800 nid=0x3703 in > Object.wait() [0x000110a9f000] >java.lang.Thread.State: WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > - waiting o
[jira] [Assigned] (SQOOP-1916) Sqoop2: Yarn child leaking in integration tests
[ https://issues.apache.org/jira/browse/SQOOP-1916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho reassigned SQOOP-1916: - Assignee: Scott Kuehn > Sqoop2: Yarn child leaking in integration tests > --- > > Key: SQOOP-1916 > URL: https://issues.apache.org/jira/browse/SQOOP-1916 > Project: Sqoop > Issue Type: Bug > Components: sqoop2-test >Reporter: Abraham Elmahrek >Assignee: Scott Kuehn > Fix For: 2.0.0 > > Attachments: SQOOP-1916.patch > > > It seems like the yarn child processes are leaking. > {noformat:title=jstack} > jstack 57054 > 2014-12-16 21:39:06 > Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode): > "Attach Listener" daemon prio=5 tid=0x7f90ebd57000 nid=0x610f > waiting on condition [0x] >java.lang.Thread.State: RUNNABLE > "DestroyJavaVM" prio=5 tid=0x7f90ebd42800 nid=0x1903 waiting on > condition [0x] >java.lang.Thread.State: RUNNABLE > "Abandoned connection cleanup thread" daemon prio=5 > tid=0x7f90ebce8800 nid=0x7103 in Object.wait() > [0x0001129c9000] >java.lang.Thread.State: TIMED_WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > - waiting on <0x0007fd3b9668> (a java.lang.ref.ReferenceQueue$Lock) > at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135) > - locked <0x0007fd3b9668> (a java.lang.ref.ReferenceQueue$Lock) > at > com.mysql.jdbc.AbandonedConnectionCleanupThread.run(AbandonedConnectionCleanupThread.java:41) > "OutputFormatLoader-consumer" prio=5 tid=0x7f90eb9b5800 nid=0x6d03 > waiting on condition [0x0001127c3000] >java.lang.Thread.State: WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0x0007fc9e5708> (a > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) > at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) > at > java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > "org.apache.hadoop.hdfs.PeerCache@2582a699" daemon prio=5 > tid=0x7f90eca2 nid=0x6b03 waiting on condition > [0x0001126c] >java.lang.Thread.State: TIMED_WAITING (sleeping) > at java.lang.Thread.sleep(Native Method) > at org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:244) > at org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:41) > at org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:119) > at java.lang.Thread.run(Thread.java:745) > "Service Thread" daemon prio=5 tid=0x7f90ec81f800 nid=0x5303 > runnable [0x] >java.lang.Thread.State: RUNNABLE > "C2 CompilerThread1" daemon prio=5 tid=0x7f90ef001000 nid=0x5103 > waiting on condition [0x] >java.lang.Thread.State: RUNNABLE > "C2 CompilerThread0" daemon prio=5 tid=0x7f90ed819000 nid=0x4f03 > waiting on condition [0x] >java.lang.Thread.State: RUNNABLE > "Signal Dispatcher" daemon prio=5 tid=0x7f90ed818000 nid=0x4d03 > runnable [0x] >java.lang.Thread.State: RUNNABLE > "Finalizer" daemon prio=5 tid=0x7f90ec821800 nid=0x3903 in > Object.wait() [0x000110ba2000] >java.lang.Thread.State: WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > - waiting on <0x0007ff842558> (a java.lang.ref.ReferenceQueue$Lock) > at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135) > - locked <0x0007ff842558> (a java.lang.ref.ReferenceQueue$Lock) > at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151) > at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209) > "Reference Handler" daemon prio=5 tid=0x7f90eb811800 nid=0x3703 in > Object.wait() [0x000110a9f000] >java.lang.Thread.State: WAITING (on object monitor) 
> at java.lang.Object.wait(Native Method) > - waiting on <0x0007ff841f40> (a java.lang.ref.Reference$Lock) > at java.lang.Object.wait(Object.java:503) > at java.lang.ref.Refer
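The thread dumps quoted in this thread are the core evidence: a JVM exits only when no non-daemon threads remain, so a lingering non-daemon thread can leave a YARN child process behind. Below is a minimal, self-contained sketch (not Sqoop code; the class and method names are invented for illustration) of a helper a test harness could use to report which non-daemon threads are still alive after a job finishes:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper (not part of Sqoop): lists live non-daemon threads
// other than the calling thread. Anything reported here can keep the
// process alive after the work is done.
class ThreadLeakCheck {
    static List<String> lingeringThreads() {
        List<String> names = new ArrayList<>();
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            if (t.isAlive() && !t.isDaemon() && t != Thread.currentThread()) {
                names.add(t.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) {
        System.out.println("Non-daemon threads still running: " + lingeringThreads());
    }
}
```

A harness could call this after test teardown and fail the run when the list is non-empty.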
[jira] [Updated] (SQOOP-2872) Sqoop2: Enrich Kafka Connector resource file
[ https://issues.apache.org/jira/browse/SQOOP-2872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho updated SQOOP-2872: -- Attachment: SQOOP-2872.patch > Sqoop2: Enrich Kafka Connector resource file > > > Key: SQOOP-2872 > URL: https://issues.apache.org/jira/browse/SQOOP-2872 > Project: Sqoop > Issue Type: Sub-task > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2872.patch > > > Please see parent JIRA for more details. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (SQOOP-2872) Sqoop2: Enrich Kafka Connector resource file
Jarek Jarcec Cecho created SQOOP-2872: - Summary: Sqoop2: Enrich Kafka Connector resource file Key: SQOOP-2872 URL: https://issues.apache.org/jira/browse/SQOOP-2872 Project: Sqoop Issue Type: Sub-task Reporter: Jarek Jarcec Cecho Assignee: Jarek Jarcec Cecho Fix For: 1.99.7 Attachments: SQOOP-2872.patch Please see parent JIRA for more details. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (SQOOP-2871) Sqoop2: Enrich FTP Connector resource file
[ https://issues.apache.org/jira/browse/SQOOP-2871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho updated SQOOP-2871: -- Attachment: SQOOP-2871.patch > Sqoop2: Enrich FTP Connector resource file > -- > > Key: SQOOP-2871 > URL: https://issues.apache.org/jira/browse/SQOOP-2871 > Project: Sqoop > Issue Type: Sub-task > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2871.patch > > > Please see parent JIRA for details. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (SQOOP-2871) Sqoop2: Enrich FTP Connector resource file
Jarek Jarcec Cecho created SQOOP-2871: - Summary: Sqoop2: Enrich FTP Connector resource file Key: SQOOP-2871 URL: https://issues.apache.org/jira/browse/SQOOP-2871 Project: Sqoop Issue Type: Sub-task Reporter: Jarek Jarcec Cecho Assignee: Jarek Jarcec Cecho Fix For: 1.99.7 Please see parent JIRA for details. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2856) Sqoop2: Enrich HDFS Connector resource file
[ https://issues.apache.org/jira/browse/SQOOP-2856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181775#comment-15181775 ] Jarek Jarcec Cecho commented on SQOOP-2856: --- And restarted again. > Sqoop2: Enrich HDFS Connector resource file > --- > > Key: SQOOP-2856 > URL: https://issues.apache.org/jira/browse/SQOOP-2856 > Project: Sqoop > Issue Type: Sub-task > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2856.patch > > > Please see parent JIRA for more details about what -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2848) Sqoop2: RESTiliency: Simplify JobRequestHandler.getJobs similarly as was done for getLinks
[ https://issues.apache.org/jira/browse/SQOOP-2848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181773#comment-15181773 ] Jarek Jarcec Cecho commented on SQOOP-2848: --- I don't think that the failure is relevant, will restart the hook. > Sqoop2: RESTiliency: Simplify JobRequestHandler.getJobs similarly as was done > for getLinks > -- > > Key: SQOOP-2848 > URL: https://issues.apache.org/jira/browse/SQOOP-2848 > Project: Sqoop > Issue Type: Sub-task > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2848.patch, SQOOP-2848.patch, SQOOP-2848.patch, > SQOOP-2848.patch > > > I would like to simplify the {{JobRequestHandler.getJobs}} similarly as we've > changed for {{LinkRequestHandler.getLinks}} back in SQOOP-2670. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (SQOOP-2853) Sqoop2: Refactor TableDisplayer to be used in document generation
[ https://issues.apache.org/jira/browse/SQOOP-2853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho updated SQOOP-2853: -- Attachment: SQOOP-2853.patch > Sqoop2: Refactor TableDisplayer to be used in document generation > - > > Key: SQOOP-2853 > URL: https://issues.apache.org/jira/browse/SQOOP-2853 > Project: Sqoop > Issue Type: Improvement > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2853.patch, SQOOP-2853.patch, SQOOP-2853.patch > > > I would like to refactor the existing > {{[TableDisplayer|https://github.com/apache/sqoop/blob/sqoop2/shell/src/main/java/org/apache/sqoop/shell/utils/TableDisplayer.java]}}, > so that it can generate tables for documentation as well. Right now that > class is too tightly coupled to the way the shell works. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
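The refactoring idea in SQOOP-2853 is to separate table formatting from shell output so the same logic can also emit tables for generated documentation. A small illustrative sketch (not the actual TableDisplayer API; the class and method names are made up) that renders a header and rows as an ASCII grid-table string:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch, not Sqoop's TableDisplayer: renders tabular data
// as a string, so callers (shell or doc generator) decide where it goes.
class TableSketch {
    static String render(List<String> header, List<List<String>> rows) {
        int cols = header.size();
        int[] widths = new int[cols];
        for (int c = 0; c < cols; c++) {
            widths[c] = header.get(c).length();
            for (List<String> row : rows) {
                widths[c] = Math.max(widths[c], row.get(c).length());
            }
        }
        StringBuilder sb = new StringBuilder();
        appendSeparator(sb, widths);
        appendRow(sb, header, widths);
        appendSeparator(sb, widths);
        for (List<String> row : rows) {
            appendRow(sb, row, widths);
        }
        appendSeparator(sb, widths);
        return sb.toString();
    }

    private static void appendSeparator(StringBuilder sb, int[] widths) {
        for (int w : widths) {
            sb.append('+');
            for (int i = 0; i < w + 2; i++) sb.append('-');
        }
        sb.append("+\n");
    }

    private static void appendRow(StringBuilder sb, List<String> cells, int[] widths) {
        for (int c = 0; c < widths.length; c++) {
            sb.append("| ").append(cells.get(c));
            for (int i = cells.get(c).length(); i < widths[c] + 1; i++) sb.append(' ');
        }
        sb.append("|\n");
    }

    public static void main(String[] args) {
        System.out.print(render(Arrays.asList("Id", "Name"),
                Arrays.asList(Arrays.asList("1", "hdfs"))));
    }
}
```

Because the method returns a plain string, the shell can print it while a documentation generator writes it into a file.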
[jira] [Created] (SQOOP-2870) Sqoop2: RESTiliency: Add tests for DriverHandler
Jarek Jarcec Cecho created SQOOP-2870: - Summary: Sqoop2: RESTiliency: Add tests for DriverHandler Key: SQOOP-2870 URL: https://issues.apache.org/jira/browse/SQOOP-2870 Project: Sqoop Issue Type: Sub-task Reporter: Jarek Jarcec Cecho Assignee: Jarek Jarcec Cecho Fix For: 1.99.7 Attachments: SQOOP-2870.patch I would like to add a small test case covering the {{DriverHandler}} module, similar to what we're building for other REST endpoints. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (SQOOP-2870) Sqoop2: RESTiliency: Add tests for DriverHandler
[ https://issues.apache.org/jira/browse/SQOOP-2870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho updated SQOOP-2870: -- Attachment: SQOOP-2870.patch > Sqoop2: RESTiliency: Add tests for DriverHandler > > > Key: SQOOP-2870 > URL: https://issues.apache.org/jira/browse/SQOOP-2870 > Project: Sqoop > Issue Type: Sub-task > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2870.patch > > > I would like to add a small test case covering the {{DriverHandler}} module, > similar to what we're building for other REST endpoints. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2856) Sqoop2: Enrich HDFS Connector resource file
[ https://issues.apache.org/jira/browse/SQOOP-2856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15180688#comment-15180688 ] Jarek Jarcec Cecho commented on SQOOP-2856: --- I've restarted the hook. > Sqoop2: Enrich HDFS Connector resource file > --- > > Key: SQOOP-2856 > URL: https://issues.apache.org/jira/browse/SQOOP-2856 > Project: Sqoop > Issue Type: Sub-task > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2856.patch > > > Please see parent JIRA for more details about what -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (SQOOP-2853) Sqoop2: Refactor TableDisplayer to be used in document generation
[ https://issues.apache.org/jira/browse/SQOOP-2853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho updated SQOOP-2853: -- Attachment: SQOOP-2853.patch > Sqoop2: Refactor TableDisplayer to be used in document generation > - > > Key: SQOOP-2853 > URL: https://issues.apache.org/jira/browse/SQOOP-2853 > Project: Sqoop > Issue Type: Improvement > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2853.patch, SQOOP-2853.patch > > > I would like to refactor the existing > {{[TableDisplayer|https://github.com/apache/sqoop/blob/sqoop2/shell/src/main/java/org/apache/sqoop/shell/utils/TableDisplayer.java]}}, > so that it can generate tables for documentation as well. Right now that > class is too tightly coupled to the way the shell works. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (SQOOP-2848) Sqoop2: RESTiliency: Simplify JobRequestHandler.getJobs similarly as was done for getLinks
[ https://issues.apache.org/jira/browse/SQOOP-2848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho updated SQOOP-2848: -- Attachment: SQOOP-2848.patch > Sqoop2: RESTiliency: Simplify JobRequestHandler.getJobs similarly as was done > for getLinks > -- > > Key: SQOOP-2848 > URL: https://issues.apache.org/jira/browse/SQOOP-2848 > Project: Sqoop > Issue Type: Sub-task > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2848.patch, SQOOP-2848.patch, SQOOP-2848.patch, > SQOOP-2848.patch > > > I would like to simplify the {{JobRequestHandler.getJobs}} similarly as we've > changed for {{LinkRequestHandler.getLinks}} back in SQOOP-2670. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (SQOOP-2848) Sqoop2: RESTiliency: Simplify JobRequestHandler.getJobs similarly as was done for getLinks
[ https://issues.apache.org/jira/browse/SQOOP-2848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho updated SQOOP-2848: -- Attachment: SQOOP-2848.patch > Sqoop2: RESTiliency: Simplify JobRequestHandler.getJobs similarly as was done > for getLinks > -- > > Key: SQOOP-2848 > URL: https://issues.apache.org/jira/browse/SQOOP-2848 > Project: Sqoop > Issue Type: Sub-task > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2848.patch, SQOOP-2848.patch, SQOOP-2848.patch > > > I would like to simplify the {{JobRequestHandler.getJobs}} similarly as we've > changed for {{LinkRequestHandler.getLinks}} back in SQOOP-2670. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (SQOOP-2868) Sqoop2: Introduce link and job validations
Jarek Jarcec Cecho created SQOOP-2868: - Summary: Sqoop2: Introduce link and job validations Key: SQOOP-2868 URL: https://issues.apache.org/jira/browse/SQOOP-2868 Project: Sqoop Issue Type: Bug Reporter: Jarek Jarcec Cecho Fix For: 1.99.7 As [~abrahamfine] correctly pointed out, "all" is a special keyword that we're using on our REST interface and therefore it should not be allowed as a link/job name. I'm sure that there will be other limitations that we should also enforce. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (SQOOP-2867) Sqoop2: ResTiliency: Define one single constant for all
Jarek Jarcec Cecho created SQOOP-2867: - Summary: Sqoop2: ResTiliency: Define one single constant for all Key: SQOOP-2867 URL: https://issues.apache.org/jira/browse/SQOOP-2867 Project: Sqoop Issue Type: Sub-task Reporter: Jarek Jarcec Cecho Assignee: Jarek Jarcec Cecho Fix For: 1.99.7 We're currently using the string "all" in every Handler that supports it. As [~abrahamfine] correctly [pointed out|https://reviews.apache.org/r/43783/], we should convert it to a shared constant. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
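SQOOP-2867 and SQOOP-2868 together suggest the shape of the fix: hoist the magic string into one shared constant and reject it during link/job name validation. A hedged sketch (the class, constant, and method names here are hypothetical, not Sqoop's actual API):

```java
// Hypothetical sketch: one shared constant for the REST "all" keyword,
// plus a validation that refuses it as a link/job name.
class ResourceNames {
    static final String ALL = "all";

    static void validateName(String name) {
        if (name == null || name.isEmpty()) {
            throw new IllegalArgumentException("Link/job name must not be empty");
        }
        if (ALL.equalsIgnoreCase(name)) {
            throw new IllegalArgumentException(
                "'" + ALL + "' is reserved by the REST interface and cannot be used as a name");
        }
    }

    public static void main(String[] args) {
        validateName("my-job"); // accepted; "all" would throw
        System.out.println("ok");
    }
}
```

Handlers would then compare against `ResourceNames.ALL` instead of a string literal, so the keyword and the validation cannot drift apart.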
[jira] [Commented] (SQOOP-2848) Sqoop2: RESTiliency: Simplify JobRequestHandler.getJobs similarly as was done for getLinks
[ https://issues.apache.org/jira/browse/SQOOP-2848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15180443#comment-15180443 ] Jarek Jarcec Cecho commented on SQOOP-2848: --- I've restarted precommit hook as we've resolved the timing out issue. > Sqoop2: RESTiliency: Simplify JobRequestHandler.getJobs similarly as was done > for getLinks > -- > > Key: SQOOP-2848 > URL: https://issues.apache.org/jira/browse/SQOOP-2848 > Project: Sqoop > Issue Type: Sub-task > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2848.patch, SQOOP-2848.patch > > > I would like to simplify the {{JobRequestHandler.getJobs}} similarly as we've > changed for {{LinkRequestHandler.getLinks}} back in SQOOP-2670. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2545) Sqoop2: RESTiliency: Provide tests for non-existing end points
[ https://issues.apache.org/jira/browse/SQOOP-2545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15180438#comment-15180438 ] Jarek Jarcec Cecho commented on SQOOP-2545: --- {{ParquetTest}} is flaky and {{S3Test}} is not really failing. > Sqoop2: RESTiliency: Provide tests for non-existing end points > -- > > Key: SQOOP-2545 > URL: https://issues.apache.org/jira/browse/SQOOP-2545 > Project: Sqoop > Issue Type: Sub-task > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2545.patch, SQOOP-2545.patch > > > I would like to see what happens when I call some random non-existing > REST URLs to our server. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (SQOOP-2866) Sqoop2: Add Abe Fine to committer list in our pom file
[ https://issues.apache.org/jira/browse/SQOOP-2866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15180419#comment-15180419 ] Jarek Jarcec Cecho commented on SQOOP-2866: --- Could you review this change, [~abrahamfine], and commit it to verify that you have proper access? > Sqoop2: Add Abe Fine to committer list in our pom file > -- > > Key: SQOOP-2866 > URL: https://issues.apache.org/jira/browse/SQOOP-2866 > Project: Sqoop > Issue Type: Bug > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2866.patch > > > Now that [~abrahamfine] is a committer, we should update our committer list in > the root pom.xml file: > https://github.com/apache/sqoop/blob/sqoop2/pom.xml#L903 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (SQOOP-2866) Sqoop2: Add Abe Fine to committer list in our pom file
[ https://issues.apache.org/jira/browse/SQOOP-2866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jarek Jarcec Cecho updated SQOOP-2866: -- Attachment: SQOOP-2866.patch > Sqoop2: Add Abe Fine to committer list in our pom file > -- > > Key: SQOOP-2866 > URL: https://issues.apache.org/jira/browse/SQOOP-2866 > Project: Sqoop > Issue Type: Bug > Reporter: Jarek Jarcec Cecho > Assignee: Jarek Jarcec Cecho > Fix For: 1.99.7 > > Attachments: SQOOP-2866.patch > > > Now that [~abrahamfine] is a committer, we should update our committer list in > the root pom.xml file: > https://github.com/apache/sqoop/blob/sqoop2/pom.xml#L903 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (SQOOP-2866) Sqoop2: Add Abe Fine to committer list in our pom file
Jarek Jarcec Cecho created SQOOP-2866: - Summary: Sqoop2: Add Abe Fine to committer list in our pom file Key: SQOOP-2866 URL: https://issues.apache.org/jira/browse/SQOOP-2866 Project: Sqoop Issue Type: Bug Reporter: Jarek Jarcec Cecho Assignee: Jarek Jarcec Cecho Fix For: 1.99.7 Now that [~abrahamfine] is a committer, we should update our committer list in the root pom.xml file: https://github.com/apache/sqoop/blob/sqoop2/pom.xml#L903 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
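For context, a committer entry in a Maven root pom.xml lives under the `<developers>` element. A hedged sketch of what such an entry looks like; the `id` value below is an illustrative placeholder, not the actual value committed for SQOOP-2866:

```xml
<!-- Illustrative Maven <developers> entry; the id is a placeholder. -->
<developer>
  <id>placeholder-id</id>
  <name>Abraham Fine</name>
  <roles>
    <role>Committer</role>
  </roles>
</developer>
```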
[jira] [Commented] (SQOOP-2845) Sqoop2: Derive keystore password from a script passed to configuration
[ https://issues.apache.org/jira/browse/SQOOP-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15180185#comment-15180185 ] Jarek Jarcec Cecho commented on SQOOP-2845: --- {{ParquetTest}} flakiness is tracked by SQOOP-2857 and {{S3Test}} did not fail, it's just not running, hence overriding the hook's -1. > Sqoop2: Derive keystore password from a script passed to configuration > -- > > Key: SQOOP-2845 > URL: https://issues.apache.org/jira/browse/SQOOP-2845 > Project: Sqoop > Issue Type: Sub-task >Affects Versions: 1.99.6 >Reporter: Abraham Fine >Assignee: Abraham Fine > Attachments: SQOOP-2845.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
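SQOOP-2845 proposes deriving the keystore password from a script referenced in the configuration, rather than storing the password itself in plain text. A hedged sketch of the general technique (this is not Sqoop's actual implementation; the class and method names are invented): run the configured script and read the password from the first line of its standard output:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

// Hypothetical sketch: execute a user-supplied generator command and treat
// the first line of its stdout as the keystore password.
class KeystorePasswordFromScript {
    static String readPassword(String... command) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(command).start();
        String password;
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            password = r.readLine();
        }
        if (p.waitFor() != 0 || password == null || password.isEmpty()) {
            throw new IOException("Password generator script failed or produced no output");
        }
        return password;
    }

    public static void main(String[] args) throws Exception {
        // Example: a trivial generator; a real deployment would point at a
        // script configured by the administrator.
        System.out.println(readPassword("sh", "-c", "echo secret123"));
    }
}
```

The benefit is that the configuration file only names the script, so the secret can live in an agent, vault, or protected file that the script consults.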