[jira] [Comment Edited] (HADOOP-14683) FileStatus.compareTo binary compat issue between 2.7 and 2.8
[ https://issues.apache.org/jira/browse/HADOOP-14683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16099531#comment-16099531 ] Akira Ajisaka edited comment on HADOOP-14683 at 7/25/17 5:50 AM:
-
It is not possible to add the old overload and keep the new API because of a compile error:
{noformat}
[ERROR] /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java:[332,14] name clash: compareTo(org.apache.hadoop.fs.FileStatus) in org.apache.hadoop.fs.FileStatus overrides a method whose erasure is the same as another method, yet neither overrides the other
[ERROR] first method: compareTo(java.lang.Object) in org.apache.hadoop.fs.FileStatus
[ERROR] second method: compareTo(T) in java.lang.Comparable
{noformat}
This API is {{@Public}} and {{@Stable}}, so I'm thinking it's good to revert HADOOP-12209 from branch-2, branch-2.8, and branch-2.8.2.

was (Author: ajisakaa):
It is not possible to add the old overload and keep the new API because of a compile error:
{noformat}
[ERROR] /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java:[332,14] name clash: compareTo(org.apache.hadoop.fs.FileStatus) in org.apache.hadoop.fs.FileStatus overrides a method whose erasure is the same as another method, yet neither overrides the other
[ERROR] first method: compareTo(java.lang.Object) in org.apache.hadoop.fs.FileStatus
[ERROR] second method: compareTo(T) in java.lang.Comparable
{noformat}
This API is {{@Public}} and {{@Stable}}, so I'm thinking it's good to revert HADOOP-12209 from branch-2 and branch-2.8.

> FileStatus.compareTo binary compat issue between 2.7 and 2.8
> ------------------------------------------------------------
>
>                 Key: HADOOP-14683
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14683
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.8.0, 2.8.1
>            Reporter: Sergey Shelukhin
>            Priority: Critical
>
> See HIVE-17133. Looks like the signature change is causing issues; according to [~jnp] this is a public API.
> Is it possible to add the old overload back (keeping the new one presumably) in a point release on 2.8? That way we can avoid creating yet another shim for this in Hive.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
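The quoted compile error can be reproduced outside Hadoop. Below is a minimal sketch (the class name {{FileStatusLike}} is hypothetical, not a Hadoop class): once a class declares {{Comparable<FileStatusLike>}} with a typed {{compareTo}}, re-adding the old {{compareTo(Object)}} overload triggers exactly this name clash, because the compiler-generated bridge method for the typed {{compareTo}} has the same erasure as the explicit {{compareTo(Object)}}.

```java
// Hypothetical stand-in for FileStatus, illustrating the javac "name clash".
public class FileStatusLike implements Comparable<FileStatusLike> {
    private final String path;

    public FileStatusLike(String path) {
        this.path = path;
    }

    // The new 2.8-style typed method from HADOOP-12209.
    @Override
    public int compareTo(FileStatusLike other) {
        return path.compareTo(other.path);
    }

    // The old 2.7-style overload cannot coexist with the typed method above.
    // Uncommenting it makes javac fail with:
    //   "name clash: compareTo(FileStatusLike) ... overrides a method whose
    //    erasure is the same as another method, yet neither overrides the other"
    // public int compareTo(Object other) {
    //     return compareTo((FileStatusLike) other);
    // }
}
```

This is why the comment concludes that reverting HADOOP-12209 is the only way to restore the 2.7 signature.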
[jira] [Updated] (HADOOP-14683) FileStatus.compareTo binary compat issue between 2.7 and 2.8
[ https://issues.apache.org/jira/browse/HADOOP-14683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-14683:
---
    Affects Version/s: 2.8.0
                       2.8.1
             Priority: Critical  (was: Major)
     Target Version/s: 2.8.2

> FileStatus.compareTo binary compat issue between 2.7 and 2.8
> ------------------------------------------------------------
>
>                 Key: HADOOP-14683
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14683
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.8.0, 2.8.1
>            Reporter: Sergey Shelukhin
>            Priority: Critical
>
> See HIVE-17133. Looks like the signature change is causing issues; according to [~jnp] this is a public API.
> Is it possible to add the old overload back (keeping the new one presumably) in a point release on 2.8? That way we can avoid creating yet another shim for this in Hive.
[jira] [Updated] (HADOOP-12209) Comparable type should be in FileStatus
[ https://issues.apache.org/jira/browse/HADOOP-12209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-12209:
---
    Hadoop Flags: Incompatible change

This caused HADOOP-14683. Can we revert this from branch-2, branch-2.8, and branch-2.8.2?

> Comparable type should be in FileStatus
> ---------------------------------------
>
>                 Key: HADOOP-12209
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12209
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 2.7.1
>            Reporter: Yong Zhang
>            Assignee: Yong Zhang
>            Priority: Minor
>             Fix For: 2.8.0, 3.0.0-alpha1
>
>         Attachments: HADOOP-12209.patch, HADOOP-12209.patch
>
> FileStatus implements the Comparable interface without a type parameter, so Collections.binarySearch does not work for a FileStatus list.
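The description above is the motivation for HADOOP-12209. A minimal sketch under illustrative names ({{RawStatus}} and {{TypedStatus}} are not Hadoop classes): {{Collections.binarySearch}} requires elements to satisfy {{Comparable<? super T>}}, which the typed interface satisfies cleanly, while the raw interface typically gets by only via unchecked conversions and warnings.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// How FileStatus looked before HADOOP-12209: raw Comparable.
class RawStatus implements Comparable {
    final String path;
    RawStatus(String path) { this.path = path; }
    public int compareTo(Object other) {
        return path.compareTo(((RawStatus) other).path);
    }
}

// How it looks after HADOOP-12209: Comparable with a type parameter.
class TypedStatus implements Comparable<TypedStatus> {
    final String path;
    TypedStatus(String path) { this.path = path; }
    public int compareTo(TypedStatus other) {
        return path.compareTo(other.path);
    }
}

public class BinarySearchDemo {
    public static void main(String[] args) {
        List<TypedStatus> list = new ArrayList<>();
        list.add(new TypedStatus("/a"));
        list.add(new TypedStatus("/b"));
        // Compiles cleanly: TypedStatus satisfies Comparable<? super TypedStatus>.
        // The equivalent call on a List<RawStatus> only type-checks via an
        // unchecked conversion of the raw Comparable supertype.
        int idx = Collections.binarySearch(list, new TypedStatus("/b"));
        System.out.println(idx);  // prints 1
    }
}
```

The trade-off discussed in this thread is that fixing the generic signature changed the erased method signature of {{compareTo}}, which is what broke binary compatibility between 2.7 and 2.8.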
[jira] [Commented] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.
[ https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16099532#comment-16099532 ] Hadoop QA commented on HADOOP-14672:

| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m 34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s{color} | {color:green} hadoop-client-minicluster in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 46s{color} | {color:black} {color} |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14672 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12878694/HADOOP-14672.04.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml |
| uname | Linux 4f51b6365ba8 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c98201b |
| Default Java | 1.8.0_131 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12847/testReport/ |
| modules | C: hadoop-client-modules/hadoop-client-minicluster U: hadoop-client-modules/hadoop-client-minicluster |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12847/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
> Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, > dom, etc. > -- > > Key: HADOOP-14672 > URL: https://issues.apache.org/jira/browse/HADOOP-14672 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Junping Du >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HADOOP-14672.02.patch, HADOOP-14672.03.patch, > HADOOP-14672.04.patch, HADOOP-14672.patch > > > The shaded hadoop-client-minicluster shouldn't include any unshaded > dependencies, but we can see: javax, dom, sax, etc. are all unshaded. > CC [~busbey] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail:
[jira] [Commented] (HADOOP-14683) FileStatus.compareTo binary compat issue between 2.7 and 2.8
[ https://issues.apache.org/jira/browse/HADOOP-14683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16099531#comment-16099531 ] Akira Ajisaka commented on HADOOP-14683:
It is not possible to add the old overload and keep the new API because of a compile error:
{noformat}
[ERROR] /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java:[332,14] name clash: compareTo(org.apache.hadoop.fs.FileStatus) in org.apache.hadoop.fs.FileStatus overrides a method whose erasure is the same as another method, yet neither overrides the other
[ERROR] first method: compareTo(java.lang.Object) in org.apache.hadoop.fs.FileStatus
[ERROR] second method: compareTo(T) in java.lang.Comparable
{noformat}
This API is {{@Public}} and {{@Stable}}, so I'm thinking it's good to revert HADOOP-12209 from branch-2 and branch-2.8.

> FileStatus.compareTo binary compat issue between 2.7 and 2.8
> ------------------------------------------------------------
>
>                 Key: HADOOP-14683
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14683
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Sergey Shelukhin
>
> See HIVE-17133. Looks like the signature change is causing issues; according to [~jnp] this is a public API.
> Is it possible to add the old overload back (keeping the new one presumably) in a point release on 2.8? That way we can avoid creating yet another shim for this in Hive.
[jira] [Updated] (HADOOP-14659) UGI getShortUserName does not need to search the Subject
[ https://issues.apache.org/jira/browse/HADOOP-14659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-14659: --- Fix Version/s: (was: 2.8.3) 2.8.2 > UGI getShortUserName does not need to search the Subject > > > Key: HADOOP-14659 > URL: https://issues.apache.org/jira/browse/HADOOP-14659 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 2.0.0-alpha >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Fix For: 2.9.0, 3.0.0-beta1, 2.8.2 > > Attachments: HADOOP-14659.patch > > > {{UGI#getShortUserName}} searches the subject for the {{User}} instance. > It's not cheap to iterate a synchronized set, copy matches into a new set, > then iterating that set. The UGI ctor already set the {{User}} into a final > field... -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14646) FileContextMainOperationsBaseTest#testListStatusFilterWithSomeMatches never runs
[ https://issues.apache.org/jira/browse/HADOOP-14646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-14646: --- Fix Version/s: (was: 2.8.3) 2.8.2 > FileContextMainOperationsBaseTest#testListStatusFilterWithSomeMatches never > runs > > > Key: HADOOP-14646 > URL: https://issues.apache.org/jira/browse/HADOOP-14646 > Project: Hadoop Common > Issue Type: Bug > Components: test >Reporter: Andras Bokor >Assignee: Andras Bokor >Priority: Minor > Fix For: 2.9.0, 3.0.0-beta1, 2.8.2 > > Attachments: HADOOP-14646.01.patch > > > {{@Test}} annotation is missing above > {{FileContextMainOperationsBaseTest#testListStatusFilterWithSomeMatches}} so > it never runs. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
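JUnit 4 discovers test methods by the {{@Test}} annotation, which is why the un-annotated method above was silently never executed. A self-contained sketch of annotation-based discovery (using a stand-in annotation and runner, not JUnit itself):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Minimal illustration of why a missing @Test annotation means a method
// never runs: runners select methods by annotation, not by name.
public class TestDiscoveryDemo {

    @Retention(RetentionPolicy.RUNTIME)
    @interface Test { }

    public static class SomeTests {
        @Test
        public void annotated() { }       // would be picked up by a runner

        public void notAnnotated() { }    // silently skipped, like the
                                          // method fixed in HADOOP-14646
    }

    // Collect the names of all public methods carrying @Test.
    public static List<String> discover(Class<?> cls) {
        List<String> found = new ArrayList<>();
        for (Method m : cls.getMethods()) {
            if (m.isAnnotationPresent(Test.class)) {
                found.add(m.getName());
            }
        }
        return found;
    }

    public static void main(String[] args) {
        System.out.println(discover(SomeTests.class));  // prints [annotated]
    }
}
```

Because discovery fails silently, a test missing its annotation reports neither success nor failure, which is how the gap in {{FileContextMainOperationsBaseTest}} went unnoticed.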
[jira] [Updated] (HADOOP-14629) Improve exception checking in FileContext related JUnit tests
[ https://issues.apache.org/jira/browse/HADOOP-14629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-14629: --- Fix Version/s: (was: 2.8.3) 2.8.2 > Improve exception checking in FileContext related JUnit tests > - > > Key: HADOOP-14629 > URL: https://issues.apache.org/jira/browse/HADOOP-14629 > Project: Hadoop Common > Issue Type: Improvement > Components: fs, test >Reporter: Andras Bokor >Assignee: Andras Bokor > Fix For: 2.9.0, 3.0.0-beta1, 2.8.2 > > Attachments: HADOOP-14629.01.patch > > > {{FileContextMainOperationsBaseTest#rename}} and > {{TestHDFSFileContextMainOperations#rename}} do the same but different way. > * FileContextMainOperationsBaseTest is able to distingush exceptions > * TestHDFSFileContextMainOperations checks the files in case of error > We should use one rename method with both advantages. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13867) FilterFileSystem should override rename(.., options) to take effect of Rename options called via FilterFileSystem implementations
[ https://issues.apache.org/jira/browse/HADOOP-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13867: --- Fix Version/s: (was: 2.8.3) > FilterFileSystem should override rename(.., options) to take effect of Rename > options called via FilterFileSystem implementations > - > > Key: HADOOP-13867 > URL: https://issues.apache.org/jira/browse/HADOOP-13867 > Project: Hadoop Common > Issue Type: Bug >Reporter: Vinayakumar B >Assignee: Vinayakumar B > Fix For: 2.9.0, 2.7.4, 3.0.0-alpha2, 2.8.2 > > Attachments: HADOOP-13867-01.patch > > > HDFS-8312 Added Rename.TO_TRASH option to add a security check before moving > to trash. > But for FilterFileSystem implementations since this rename(..options) is not > overridden, it uses default FileSystem implementation where Rename.TO_TRASH > option is not delegated to NameNode. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.
[ https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-14672: Status: Patch Available (was: Open) > Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, > dom, etc. > -- > > Key: HADOOP-14672 > URL: https://issues.apache.org/jira/browse/HADOOP-14672 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Junping Du >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HADOOP-14672.02.patch, HADOOP-14672.03.patch, > HADOOP-14672.04.patch, HADOOP-14672.patch > > > The shaded hadoop-client-minicluster shouldn't include any unshaded > dependencies, but we can see: javax, dom, sax, etc. are all unshaded. > CC [~busbey] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.
[ https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-14672: Status: Open (was: Patch Available) > Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, > dom, etc. > -- > > Key: HADOOP-14672 > URL: https://issues.apache.org/jira/browse/HADOOP-14672 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Junping Du >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HADOOP-14672.02.patch, HADOOP-14672.03.patch, > HADOOP-14672.04.patch, HADOOP-14672.patch > > > The shaded hadoop-client-minicluster shouldn't include any unshaded > dependencies, but we can see: javax, dom, sax, etc. are all unshaded. > CC [~busbey] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14597) Native compilation broken with OpenSSL-1.1.0 because EVP_CIPHER_CTX has been made opaque
[ https://issues.apache.org/jira/browse/HADOOP-14597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16099468#comment-16099468 ] Hudson commented on HADOOP-14597:
-
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12051 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12051/])
HADOOP-14597. Native compilation broken with OpenSSL-1.1.0. Contributed (raviprak: rev 94ca52ae9ec0ae04854d726bf2ac1bc457b96a9c)
* (edit) hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c
* (edit) hadoop-tools/hadoop-pipes/src/main/native/pipes/impl/HadoopPipes.cc

> Native compilation broken with OpenSSL-1.1.0 because EVP_CIPHER_CTX has been made opaque
> ----------------------------------------------------------------------------------------
>
>                 Key: HADOOP-14597
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14597
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 3.0.0-alpha4
>         Environment: openssl-1.1.0
>            Reporter: Ravi Prakash
>            Assignee: Ravi Prakash
>             Fix For: 3.0.0-beta1
>
>         Attachments: HADOOP-14597.00.patch, HADOOP-14597.01.patch, HADOOP-14597.02.patch, HADOOP-14597.03.patch, HADOOP-14597.04.patch
>
> Trying to build Hadoop trunk on Fedora 26 which has openssl-devel-1.1.0 fails with this error
> {code}
> [WARNING] /home/raviprak/Code/hadoop/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c: In function ‘check_update_max_output_len’:
> [WARNING] /home/raviprak/Code/hadoop/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c:256:14: error: dereferencing pointer to incomplete type ‘EVP_CIPHER_CTX {aka struct evp_cipher_ctx_st}’
> [WARNING]    if (context->flags & EVP_CIPH_NO_PADDING) {
> [WARNING]               ^~
> {code}
> https://github.com/openssl/openssl/issues/962 mattcaswell says
> {quote}
> One of the primary differences between master (OpenSSL 1.1.0) and the 1.0.2 version is that many types have been made opaque, i.e. applications are no longer allowed to look inside the internals of the structures
> {quote}
[jira] [Commented] (HADOOP-14518) Customize User-Agent header sent in HTTP/HTTPS requests by WASB.
[ https://issues.apache.org/jira/browse/HADOOP-14518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16099458#comment-16099458 ] Shane Mainali commented on HADOOP-14518:
Thanks [~jnp]!

> Customize User-Agent header sent in HTTP/HTTPS requests by WASB.
> ----------------------------------------------------------------
>
>                 Key: HADOOP-14518
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14518
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.0.0-beta1
>            Reporter: Georgi Chalakov
>            Assignee: Georgi Chalakov
>            Priority: Minor
>         Attachments: HADOOP-14518-01.patch, HADOOP-14518-01-test.txt, HADOOP-14518-02.patch, HADOOP-14518-03.patch, HADOOP-14518-04.patch, HADOOP-14518-05.patch, HADOOP-14518-06.patch, HADOOP-14518-branch-2.01.patch
>
> WASB passes a User-Agent header to the Azure back-end. Right now, it uses the default value set by the Azure Client SDK, so Hadoop traffic doesn't appear any different from general Blob traffic. If we customize the User-Agent header, then it will enable better troubleshooting and analysis by the Azure service.
> The following configuration
> <property>
>   <name>fs.azure.user.agent.prefix</name>
>   <value>MSFT</value>
> </property>
> sets the user agent to
> User-Agent: WASB/3.0.0-alpha4-SNAPSHOT (MSFT) Azure-Storage/4.2.0 (JavaJRE 1.8.0_131; WindowsServer2012R2 6.3)
> Test Results:
> Tests run: 703, Failures: 0, Errors: 0, Skipped: 119
[jira] [Created] (HADOOP-14684) get rid of "skipCorrupt" flag
Sergey Shelukhin created HADOOP-14684: - Summary: get rid of "skipCorrupt" flag Key: HADOOP-14684 URL: https://issues.apache.org/jira/browse/HADOOP-14684 Project: Hadoop Common Issue Type: Bug Reporter: Sergey Shelukhin The error that caused the issue was a long time ago and it's probably ok to get rid of this flag. Perhaps we should provide a small tool to overwrite these files without the corrupt values. cc [~prasanth_j] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14683) FileStatus.compareTo binary compat issue between 2.7 and 2.8
[ https://issues.apache.org/jira/browse/HADOOP-14683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HADOOP-14683: -- Description: See HIVE-17133. Looks like the signature change is causing issues; according to [~jnp] this is a public API. Is it possible to add the old overload back (keeping the new one presumably) in a point release on 2.8? That way we can avoid creating yet another shim for this in Hive. was: See HIVE-17133. Looks like the signature change is causing issues; according to [~jnp] this is a public API. Is it possible to add the old overload back in a point release on 2.8? That way we can avoid creating yet another shim for this in Hive. > FileStatus.compareTo binary compat issue between 2.7 and 2.8 > > > Key: HADOOP-14683 > URL: https://issues.apache.org/jira/browse/HADOOP-14683 > Project: Hadoop Common > Issue Type: Bug >Reporter: Sergey Shelukhin > > See HIVE-17133. Looks like the signature change is causing issues; according > to [~jnp] this is a public API. > Is it possible to add the old overload back (keeping the new one presumably) > in a point release on 2.8? That way we can avoid creating yet another shim > for this in Hive. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14683) FileStatus.compareTo binary compat issue between 2.7 and 2.8
Sergey Shelukhin created HADOOP-14683: - Summary: FileStatus.compareTo binary compat issue between 2.7 and 2.8 Key: HADOOP-14683 URL: https://issues.apache.org/jira/browse/HADOOP-14683 Project: Hadoop Common Issue Type: Bug Reporter: Sergey Shelukhin See HIVE-17133. Looks like the signature change is causing issues; according to [~jnp] this is a public API. Is it possible to add the old overload back in a point release on 2.8? That way we can avoid creating yet another shim for this in Hive. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14089) Shaded Hadoop client runtime includes non-shaded classes
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16099300#comment-16099300 ] Bharat Viswanadham commented on HADOOP-14089:
-
[~busbey] The output is clear to understand. It took 3 minutes 30 seconds on a Mac machine to build and test the jars.

> Shaded Hadoop client runtime includes non-shaded classes
> --------------------------------------------------------
>
>                 Key: HADOOP-14089
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14089
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: 3.0.0-alpha2
>            Reporter: David Phillips
>            Assignee: Sean Busbey
>            Priority: Critical
>         Attachments: HADOOP-14089.WIP.0.patch
>
> The jar includes things like {{assets}}, {{okio}}, {{javax/annotation}}, {{javax/ws}}, {{mozilla}}, etc.
> An easy way to verify this is to look at the contents of the jar:
> {code}
> jar tf hadoop-client-runtime-xxx.jar | sort | grep -v '^org/apache/hadoop'
> {code}
> For standard dependencies, such as the JSR 305 {{javax.annotation}} or JAX-RS {{javax.ws}}, it makes sense for those to be normal dependencies in the POM -- they are standard, so version conflicts shouldn't be a problem. The JSR 305 annotations can be marked {{<optional>true</optional>}} since they aren't needed at runtime (this is what Guava does).
[jira] [Comment Edited] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.
[ https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099286#comment-16099286 ] Bharat Viswanadham edited comment on HADOOP-14672 at 7/25/17 12:05 AM: --- [~busbey] Yes, they are used in the tests internally. I don't know whether these will be used by downstream projects or not. So, do i need to update the patch to exclude them? cc [~djp] was (Author: bharatviswa): [~busbey] Yes, they are used in the tests internally. I don't know whether these will be used by downstream projects or not. So, do i need to update the patch to exclude them? > Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, > dom, etc. > -- > > Key: HADOOP-14672 > URL: https://issues.apache.org/jira/browse/HADOOP-14672 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Junping Du >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HADOOP-14672.02.patch, HADOOP-14672.03.patch, > HADOOP-14672.04.patch, HADOOP-14672.patch > > > The shaded hadoop-client-minicluster shouldn't include any unshaded > dependencies, but we can see: javax, dom, sax, etc. are all unshaded. > CC [~busbey] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.
[ https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099286#comment-16099286 ] Bharat Viswanadham edited comment on HADOOP-14672 at 7/25/17 12:04 AM: --- [~busbey] Yes, they are used in the tests internally. I don't know whether these will be used by downstream projects or not. So, do i need to update the patch to exclude them? was (Author: bharatviswa): [~busbey] Yes, they are used in the tests internally. I don't know whether these will be used by downstream projects or not. will update the patch to exclude them. > Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, > dom, etc. > -- > > Key: HADOOP-14672 > URL: https://issues.apache.org/jira/browse/HADOOP-14672 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Junping Du >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HADOOP-14672.02.patch, HADOOP-14672.03.patch, > HADOOP-14672.04.patch, HADOOP-14672.patch > > > The shaded hadoop-client-minicluster shouldn't include any unshaded > dependencies, but we can see: javax, dom, sax, etc. are all unshaded. > CC [~busbey] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.
[ https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099286#comment-16099286 ] Bharat Viswanadham edited comment on HADOOP-14672 at 7/25/17 12:02 AM: --- [~busbey] Yes, they are used in the tests internally. I don't know whether these will be used by downstream projects or not. will update the patch to exclude them. was (Author: bharatviswa): [~busbey] Yes, they are used in the tests internally. I don't know whether these will be used by downstream projects or not. will update the patch to exclude them. > Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, > dom, etc. > -- > > Key: HADOOP-14672 > URL: https://issues.apache.org/jira/browse/HADOOP-14672 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Junping Du >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HADOOP-14672.02.patch, HADOOP-14672.03.patch, > HADOOP-14672.04.patch, HADOOP-14672.patch > > > The shaded hadoop-client-minicluster shouldn't include any unshaded > dependencies, but we can see: javax, dom, sax, etc. are all unshaded. > CC [~busbey] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.
[ https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099286#comment-16099286 ] Bharat Viswanadham commented on HADOOP-14672: - [~busbey] Yes, they are used in the tests internally. I don't know whether these will be used by downstream projects or not. will update the patch to exclude them. > Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, > dom, etc. > -- > > Key: HADOOP-14672 > URL: https://issues.apache.org/jira/browse/HADOOP-14672 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Junping Du >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HADOOP-14672.02.patch, HADOOP-14672.03.patch, > HADOOP-14672.04.patch, HADOOP-14672.patch > > > The shaded hadoop-client-minicluster shouldn't include any unshaded > dependencies, but we can see: javax, dom, sax, etc. are all unshaded. > CC [~busbey] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.
[ https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099268#comment-16099268 ] Bharat Viswanadham edited comment on HADOOP-14672 at 7/24/17 11:52 PM: --- For the rest of the stuff, I am not sure, and those are properties files, I am not sure will it cause any issues? Do you mean testjar, testshell should be excluded from minicluster jar or you will update the script checking jar to include them, so that it will not result in the output? was (Author: bharatviswa): For the rest of the stuff, I am not sure, and those are properties files, I am not sure will it cause any issues? Do you mean testjar, testshell should be excluded from minicluster jar or you will update the script checking jar to include them, so that it will not result in the output?
[jira] [Commented] (HADOOP-14682) cmake Makefiles in hadoop-common don't properly respect -Dopenssl.prefix
[ https://issues.apache.org/jira/browse/HADOOP-14682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099273#comment-16099273 ] Allen Wittenauer commented on HADOOP-14682: --- I think, but I'm not sure, that the basic problem is that the LD_LIBRARY_PATH/DYLD_LIBRARY_PATH/linked runpath isn't getting set correctly when the native libraries are getting tested. > cmake Makefiles in hadoop-common don't properly respect -Dopenssl.prefix > > > Key: HADOOP-14682 > URL: https://issues.apache.org/jira/browse/HADOOP-14682 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ravi Prakash > > Allen reported that while running tests, cmake didn't properly respect > -Dopenssl.prefix that would allow us to build and run the tests with > different versions of OpenSSL. > https://issues.apache.org/jira/browse/HADOOP-14597?focusedCommentId=16092114=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16092114 > I too encountered some funny stuff while trying to build with a non-default > OpenSSL library.
[jira] [Commented] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.
[ https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099271#comment-16099271 ] Sean Busbey commented on HADOOP-14672: -- I thought we were in agreement that they shouldn't be in the shaded jar. Why would we publish them to downstream? They're used in a test we run internally, we don't intend for them to be used downstream, right?
[jira] [Commented] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.
[ https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099268#comment-16099268 ] Bharat Viswanadham commented on HADOOP-14672: - For the rest of the stuff, I am not sure, and those are properties files, I am not sure will it cause any issues? Do you mean testjar, testshell should be excluded from minicluster jar or you will update the script checking jar to include them, so that it will not result in the output?
[jira] [Commented] (HADOOP-14682) cmake Makefiles in hadoop-common don't properly respect -Dopenssl.prefix
[ https://issues.apache.org/jira/browse/HADOOP-14682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099258#comment-16099258 ] Ravi Prakash commented on HADOOP-14682: --- I briefly tried stuff around https://github.com/apache/hadoop/blob/94ca52ae9ec0ae04854d726bf2ac1bc457b96a9c/hadoop-common-project/hadoop-common/src/CMakeLists.txt#L171 but still failed.
[jira] [Commented] (HADOOP-14597) Native compilation broken with OpenSSL-1.1.0 because EVP_CIPHER_CTX has been made opaque
[ https://issues.apache.org/jira/browse/HADOOP-14597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099254#comment-16099254 ] Ravi Prakash commented on HADOOP-14597: --- I've filed https://issues.apache.org/jira/browse/HADOOP-14682 to document the issue in the Cmake files > Native compilation broken with OpenSSL-1.1.0 because EVP_CIPHER_CTX has been > made opaque > > > Key: HADOOP-14597 > URL: https://issues.apache.org/jira/browse/HADOOP-14597 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 > Environment: openssl-1.1.0 >Reporter: Ravi Prakash >Assignee: Ravi Prakash > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14597.00.patch, HADOOP-14597.01.patch, > HADOOP-14597.02.patch, HADOOP-14597.03.patch, HADOOP-14597.04.patch > > > Trying to build Hadoop trunk on Fedora 26 which has openssl-devel-1.1.0 fails > with this error > {code}[WARNING] > /home/raviprak/Code/hadoop/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c: > In function ‘check_update_max_output_len’: > [WARNING] > /home/raviprak/Code/hadoop/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c:256:14: > error: dereferencing pointer to incomplete type ‘EVP_CIPHER_CTX {aka struct > evp_cipher_ctx_st}’ > [WARNING]if (context->flags & EVP_CIPH_NO_PADDING) { > [WARNING] ^~ > {code} > https://github.com/openssl/openssl/issues/962 mattcaswell says > {quote} > One of the primary differences between master (OpenSSL 1.1.0) and the 1.0.2 > version is that many types have been made opaque, i.e. applications are no > longer allowed to look inside the internals of the structures > {quote}
[jira] [Created] (HADOOP-14682) cmake Makefiles in hadoop-common don't properly respect -Dopenssl.prefix
Ravi Prakash created HADOOP-14682: - Summary: cmake Makefiles in hadoop-common don't properly respect -Dopenssl.prefix Key: HADOOP-14682 URL: https://issues.apache.org/jira/browse/HADOOP-14682 Project: Hadoop Common Issue Type: Bug Reporter: Ravi Prakash Allen reported that while running tests, cmake didn't properly respect -Dopenssl.prefix that would allow us to build and run the tests with different versions of OpenSSL. https://issues.apache.org/jira/browse/HADOOP-14597?focusedCommentId=16092114=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16092114 I too encountered some funny stuff while trying to build with a non-default OpenSSL library.
[jira] [Commented] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.
[ https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099245#comment-16099245 ] Sean Busbey commented on HADOOP-14672: -- copying over here from HADOOP-14089, I agree that we should exclude the testjar/testshell classes from the minicluster artifact. How about the rest of the stuff called out by HADOOP-14089's new test?
[jira] [Commented] (HADOOP-14089) Shaded Hadoop client runtime includes non-shaded classes
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099243#comment-16099243 ] Sean Busbey commented on HADOOP-14089: -- excellent. glad it works. Was it fast enough? is the output easy enough to understand wrt what the problem is and what must be done for the problem to be solved? (FWIW, I agree that testjar and testshell stuff needn't be included in the minicluster jar. I'll add a comment to HADOOP-14672 stating as much.) > Shaded Hadoop client runtime includes non-shaded classes > > > Key: HADOOP-14089 > URL: https://issues.apache.org/jira/browse/HADOOP-14089 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: David Phillips >Assignee: Sean Busbey >Priority: Critical > Attachments: HADOOP-14089.WIP.0.patch > > > The jar includes things like {{assets}}, {{okio}}, {{javax/annotation}}, > {{javax/ws}}, {{mozilla}}, etc. > An easy way to verify this is to look at the contents of the jar: > {code} > jar tf hadoop-client-runtime-xxx.jar | sort | grep -v '^org/apache/hadoop' > {code} > For standard dependencies, such as the JSR 305 {{javax.annotation}} or JAX-RS > {{javax.ws}}, it makes sense for those to be normal dependencies in the POM > -- they are standard, so version conflicts shouldn't be a problem. The JSR > 305 annotations can be {{true}} since they aren't needed > at runtime (this is what Guava does).
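The `jar tf ... | grep -v '^org/apache/hadoop'` check quoted above can also be expressed programmatically. A minimal sketch (not the actual HADOOP-14089 test, whose exact allow-list logic lives in the patch): given the entry names of a shaded jar, flag any class file that was not relocated under `org/apache/hadoop/`.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Sketch of the shaded-jar leakage check: report class entries that are
// not relocated under org/apache/hadoop/. In practice the entry names
// would come from java.util.jar.JarFile over the real artifact.
public class ShadeCheck {
    static List<String> unshadedEntries(Stream<String> entryNames) {
        return entryNames
                .filter(n -> n.endsWith(".class"))            // only compiled classes matter here
                .filter(n -> !n.startsWith("org/apache/hadoop/"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> leaked = unshadedEntries(Stream.of(
                "org/apache/hadoop/fs/FileSystem.class",
                "javax/ws/rs/core/MediaType.class",
                "mozilla/public-suffix-list.txt"));
        System.out.println(leaked); // only the javax class entry is flagged
    }
}
```
A real check would additionally carry an allow-list of expected non-class resources (licenses, service files), which is exactly the list the Jenkins output above asks contributors to either fix or justify.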
[jira] [Closed] (HADOOP-14597) Native compilation broken with OpenSSL-1.1.0 because EVP_CIPHER_CTX has been made opaque
[ https://issues.apache.org/jira/browse/HADOOP-14597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravi Prakash closed HADOOP-14597. -
[jira] [Updated] (HADOOP-14397) Pull up the builder pattern to FileSystem and add AbstractContractCreateTest for it
[ https://issues.apache.org/jira/browse/HADOOP-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HADOOP-14397: --- Attachment: HADOOP-14397.001.patch Thanks for the reviews, [~fabbri]! Updated the patch to fix {{TestRawlocalContractAppend}}, which was due to missing changes of handling append in {{FileSystem#FileSystemDataOutputStreamBuilder#builder()}}. > Pull up the builder pattern to FileSystem and add AbstractContractCreateTest > for it > --- > > Key: HADOOP-14397 > URL: https://issues.apache.org/jira/browse/HADOOP-14397 > Project: Hadoop Common > Issue Type: Sub-task > Components: common, fs, hdfs-client >Affects Versions: 2.9.0, 3.0.0-alpha3 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu > Attachments: HADOOP-14397.000.patch, HADOOP-14397.001.patch > > > After reach the stability of the Builder APIs, we should promote the API from > {{DistributedFileSystem}} to {{FileSystem}}, and add necessary contract tests > to cover the API for all file systems.
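For readers unfamiliar with the builder pattern HADOOP-14397 promotes from {{DistributedFileSystem}} to {{FileSystem}}, here is a generic sketch of a create-builder. All class and method names below are illustrative, not the real Hadoop API: the point is that optional settings chain fluently and {{build()}} produces an immutable result.

```java
// Generic create-builder sketch (hypothetical names, not Hadoop's API):
// optional parameters get fluent setters with sensible defaults, and
// build() freezes them into an immutable options object.
public class CreateBuilderSketch {
    static class CreateOptions {
        final String path;
        final boolean overwrite;
        final int bufferSize;
        CreateOptions(String path, boolean overwrite, int bufferSize) {
            this.path = path;
            this.overwrite = overwrite;
            this.bufferSize = bufferSize;
        }
    }

    static class CreateBuilder {
        private final String path;        // required, fixed at construction
        private boolean overwrite = false; // optional, defaulted
        private int bufferSize = 4096;     // optional, defaulted

        CreateBuilder(String path) { this.path = path; }
        CreateBuilder overwrite(boolean v) { overwrite = v; return this; }
        CreateBuilder bufferSize(int v) { bufferSize = v; return this; }
        CreateOptions build() { return new CreateOptions(path, overwrite, bufferSize); }
    }
}
```
Contract tests like the proposed AbstractContractCreateTest can then exercise every file system through one fluent entry point instead of a matrix of positional `create(...)` overloads.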
[jira] [Updated] (HADOOP-14597) Native compilation broken with OpenSSL-1.1.0 because EVP_CIPHER_CTX has been made opaque
[ https://issues.apache.org/jira/browse/HADOOP-14597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravi Prakash updated HADOOP-14597: -- Resolution: Fixed Fix Version/s: 3.0.0-beta1 Status: Resolved (was: Patch Available)
[jira] [Commented] (HADOOP-14597) Native compilation broken with OpenSSL-1.1.0 because EVP_CIPHER_CTX has been made opaque
[ https://issues.apache.org/jira/browse/HADOOP-14597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099227#comment-16099227 ] Ravi Prakash commented on HADOOP-14597: --- Thank you for the review and comments Allen. Committing shortly.
[jira] [Commented] (HADOOP-14623) fixed some bugs in KafkaSink
[ https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099221#comment-16099221 ] Bharat Viswanadham commented on HADOOP-14623: - [~Hongyuan Li] Patch looks good to me. +1 (non-binding) One question: why you have changed Key type to byte[] from integer. Any reason for this from changing original code? (But any way, I think here it does not matter, as when we are creating Record Object, they Key is not passed, so it will default to a null value) > fixed some bugs in KafkaSink > - > > Key: HADOOP-14623 > URL: https://issues.apache.org/jira/browse/HADOOP-14623 > Project: Hadoop Common > Issue Type: Bug > Components: common, tools >Affects Versions: 3.0.0-alpha3 >Reporter: Hongyuan Li >Assignee: Hongyuan Li > Attachments: HADOOP-14623-001.patch, HADOOP-14623-002.patch, > HADOOP-14623-003.patch, HADOOP-14623-004.patch > > > {{KafkaSink}}#{{init}} should set ack to *1* to make sure the message has > been written to the broker at least. > current code list below: > {code} > > props.put("request.required.acks", "0"); > {code} > *Update* > find another bug about this class, {{key.serializer}} used > {{org.apache.kafka.common.serialization.ByteArraySerializer}}, however, the > key properties of Producer is Integer, codes list below: > {code} > props.put("key.serializer", > "org.apache.kafka.common.serialization.ByteArraySerializer"); > … > producer = new KafkaProducer(props); > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
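The two KafkaSink fixes discussed in HADOOP-14623 can be summarized as configuration choices. The sketch below shows only the producer properties quoted in the issue (no Kafka client classes are constructed here); the value-serializer line and the helper-method name are illustrative assumptions, not code from the patch.

```java
import java.util.Properties;

// Sketch of the corrected KafkaSink producer configuration from the
// discussion above. Only the two properties quoted in the issue are
// load-bearing; everything else here is illustrative scaffolding.
public class KafkaSinkConfig {
    static Properties producerProps() {
        Properties props = new Properties();
        // "1" = the leader broker must acknowledge the write. The original
        // "0" is fire-and-forget, so a metrics record could be silently lost.
        props.put("request.required.acks", "1");
        // The serializer must match the producer's key type parameter.
        // The patch under review changes the key type to byte[], which is
        // why ByteArraySerializer becomes consistent (with an Integer key
        // it was the mismatch Bharat asks about).
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        return props;
    }
}
```
As Bharat notes, because the sink never sets a record key, the key stays null and is never actually serialized, so the key-type change is about consistency rather than runtime behavior.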
[jira] [Issue Comment Deleted] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.
[ https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-14672: Comment: was deleted (was: [~busbey] I tried your patch. below is the output. testJar,testShell classes are from hadoop only (hadoop-mapreduce-client-jobclient) test. So, for this I think these should not be shaded. Found artifact with unexpected contents: '/Users/bviswanadham/workspace/hadoop/hadoop-client-modules/hadoop-client-minicluster/target/hadoop-client-minicluster-3.0.0-beta1-SNAPSHOT.jar' Please check the following and either correct the build or update the allowed list with reasoning.
capacity-scheduler.xml
krb5.conf
log4j.properties
about.html
testjar/
testjar/ClassWordCount$MapClass.class
testjar/ClassWordCount$Reduce.class
testjar/ClassWordCount.class
testjar/CustomOutputCommitter.class
testjar/ExternalIdentityReducer.class
testjar/ExternalMapperReducer.class
testjar/ExternalWritable.class
testjar/JobKillCommitter$CommitterWithFailCleanup.class
testjar/JobKillCommitter$CommitterWithFailSetup.class
testjar/JobKillCommitter$CommitterWithNoError.class
testjar/JobKillCommitter$MapperFail.class
testjar/JobKillCommitter$MapperPass.class
testjar/JobKillCommitter$MapperPassSleep.class
testjar/JobKillCommitter$ReducerFail.class
testjar/JobKillCommitter$ReducerPass.class
testjar/JobKillCommitter.class
testjar/UserNamePermission$UserNameMapper.class
testjar/UserNamePermission$UserNameReducer.class
testjar/UserNamePermission.class
testshell/
testshell/ExternalMapReduce$MapClass.class
testshell/ExternalMapReduce$Reduce.class
testshell/ExternalMapReduce.class
.options
jdtCompilerAdapter.jar
plugin.properties
plugin.xml
container-log4j.properties
java.policy
catalog.cat
javaee_5.xsd
javaee_6.xsd
javaee_web_services_client_1_2.xsd
javaee_web_services_client_1_3.xsd
jsp_2_1.xsd
jsp_2_2.xsd
web-app_2_5.xsd
web-app_3_0.xsd
web-common_3_0.xsd
xml.xsd
)
[jira] [Commented] (HADOOP-14089) Shaded Hadoop client runtime includes non-shaded classes
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099216#comment-16099216 ] Bharat Viswanadham commented on HADOOP-14089: - [~busbey] I tried your patch. below is the output. testJar,testShell classes are from hadoop only (hadoop-mapreduce-client-jobclient) test. So, for this I think these should not be shaded. Found artifact with unexpected contents: '/Users/bviswanadham/workspace/hadoop/hadoop-client-modules/hadoop-client-minicluster/target/hadoop-client-minicluster-3.0.0-beta1-SNAPSHOT.jar' Please check the following and either correct the build or update the allowed list with reasoning.
capacity-scheduler.xml
krb5.conf
log4j.properties
about.html
testjar/
testjar/ClassWordCount$MapClass.class
testjar/ClassWordCount$Reduce.class
testjar/ClassWordCount.class
testjar/CustomOutputCommitter.class
testjar/ExternalIdentityReducer.class
testjar/ExternalMapperReducer.class
testjar/ExternalWritable.class
testjar/JobKillCommitter$CommitterWithFailCleanup.class
testjar/JobKillCommitter$CommitterWithFailSetup.class
testjar/JobKillCommitter$CommitterWithNoError.class
testjar/JobKillCommitter$MapperFail.class
testjar/JobKillCommitter$MapperPass.class
testjar/JobKillCommitter$MapperPassSleep.class
testjar/JobKillCommitter$ReducerFail.class
testjar/JobKillCommitter$ReducerPass.class
testjar/JobKillCommitter.class
testjar/UserNamePermission$UserNameMapper.class
testjar/UserNamePermission$UserNameReducer.class
testjar/UserNamePermission.class
testshell/
testshell/ExternalMapReduce$MapClass.class
testshell/ExternalMapReduce$Reduce.class
testshell/ExternalMapReduce.class
.options
jdtCompilerAdapter.jar
plugin.properties
plugin.xml
container-log4j.properties
java.policy
catalog.cat
javaee_5.xsd
javaee_6.xsd
javaee_web_services_client_1_2.xsd
javaee_web_services_client_1_3.xsd
jsp_2_1.xsd
jsp_2_2.xsd
web-app_2_5.xsd
web-app_3_0.xsd
web-common_3_0.xsd
xml.xsd
[jira] [Updated] (HADOOP-14676) Wrong default value for "fs.du.interval"
[ https://issues.apache.org/jira/browse/HADOOP-14676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HADOOP-14676: - Description: According to {{core-default.xml}} the default value of {{fs.df.interval = 60 sec}}. But the implementation of {{DF}} substitutes 3 sec instead. The problem is that {{DF}} uses outdated constant {{DF.DF_INTERVAL_DEFAULT}} instead of the correct one {{CommonConfigurationKeysPublic.FS_DF_INTERVAL_DEFAULT}}. (was: According to {{core-default.xml}} the default value of {{fs.du.interval = 60 sec}}. But the implementation of {{DF}} substitutes 3 sec instead. The problem is that {{DF}} uses outdated constant {{DF.DF_INTERVAL_DEFAULT}} instead of the correct one {{CommonConfigurationKeysPublic.FS_DU_INTERVAL_DEFAULT}}.) > Wrong default value for "fs.du.interval" > > > Key: HADOOP-14676 > URL: https://issues.apache.org/jira/browse/HADOOP-14676 > Project: Hadoop Common > Issue Type: Bug > Components: common, conf, fs >Affects Versions: 2.6.1 >Reporter: Konstantin Shvachko >Assignee: Erik Krogen > > According to {{core-default.xml}} the default value of {{fs.df.interval = 60 > sec}}. But the implementation of {{DF}} substitutes 3 sec instead. The > problem is that {{DF}} uses outdated constant {{DF.DF_INTERVAL_DEFAULT}} > instead of the correct one > {{CommonConfigurationKeysPublic.FS_DF_INTERVAL_DEFAULT}}.
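The HADOOP-14676 bug boils down to two default constants that disagree, with {{DF}} consulting the wrong one. The sketch below models that with plain Java (no Hadoop classes); the 60 s and 3 s values come straight from the issue description, but the helper method is illustrative.

```java
import java.util.Map;

// Sketch of the fs.df.interval default mismatch: when the key is unset,
// the effective interval is whatever default constant the caller reads.
// 60 s is what core-default.xml documents; 3 s is what DF actually uses
// via its outdated DF.DF_INTERVAL_DEFAULT constant.
public class DfIntervalDefaults {
    static final long OUTDATED_DF_INTERVAL_DEFAULT = 3_000L;  // DF.DF_INTERVAL_DEFAULT (stale)
    static final long FS_DF_INTERVAL_DEFAULT = 60_000L;       // documented default

    static long effectiveIntervalMs(Map<String, String> conf, long defaultMs) {
        String v = conf.get("fs.df.interval");
        return v == null ? defaultMs : Long.parseLong(v);
    }
}
```
With no explicit setting, code reading the stale constant refreshes disk-usage stats twenty times more often than the documentation promises, which is why the fix is simply to read `CommonConfigurationKeysPublic.FS_DF_INTERVAL_DEFAULT` instead.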
[jira] [Updated] (HADOOP-14518) Customize User-Agent header sent in HTTP/HTTPS requests by WASB.
[ https://issues.apache.org/jira/browse/HADOOP-14518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HADOOP-14518: -- Attachment: HADOOP-14518-branch-2.01.patch Attaching branch-2 patch for pre-commit run. > Customize User-Agent header sent in HTTP/HTTPS requests by WASB. > > > Key: HADOOP-14518 > URL: https://issues.apache.org/jira/browse/HADOOP-14518 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.0.0-beta1 >Reporter: Georgi Chalakov >Assignee: Georgi Chalakov >Priority: Minor > Attachments: HADOOP-14518-01.patch, HADOOP-14518-01-test.txt, > HADOOP-14518-02.patch, HADOOP-14518-03.patch, HADOOP-14518-04.patch, > HADOOP-14518-05.patch, HADOOP-14518-06.patch, HADOOP-14518-branch-2.01.patch > > > WASB passes a User-Agent header to the Azure back-end. Right now, it uses the > default value set by the Azure Client SDK, so Hadoop traffic doesn't appear > any different from general Blob traffic. If we customize the User-Agent > header, then it will enable better troubleshooting and analysis by Azure > service. > The following configuration > > fs.azure.user.agent.prefix > MSFT > > set the user agent to > User-Agent: WASB/3.0.0-alpha4-SNAPSHOT (MSFT) Azure-Storage/4.2.0 > (JavaJRE 1.8.0_131; WindowsServer2012R2 6.3) > Test Results : > Tests run: 703, Failures: 0, Errors: 0, Skipped: 119
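The header assembly HADOOP-14518 describes can be sketched as simple string composition: the configured `fs.azure.user.agent.prefix` is spliced between a WASB version tag and the Azure SDK's own suffix, matching the example header quoted in the issue. The method name and parameterization below are illustrative, not the patch's actual code.

```java
// Illustrative sketch of the WASB User-Agent composition from the issue:
// "WASB/<version> (<configured prefix>) <Azure SDK suffix>".
public class WasbUserAgent {
    static String userAgent(String prefix, String wasbVersion, String sdkSuffix) {
        return "WASB/" + wasbVersion + " (" + prefix + ") " + sdkSuffix;
    }

    public static void main(String[] args) {
        // Mirrors the header quoted in the issue for prefix "MSFT".
        System.out.println(userAgent("MSFT", "3.0.0-alpha4-SNAPSHOT",
                "Azure-Storage/4.2.0 (JavaJRE 1.8.0_131; WindowsServer2012R2 6.3)"));
    }
}
```
Tagging the prefix this way is what lets the Azure service distinguish Hadoop/WASB traffic from generic Blob traffic during troubleshooting, which is the whole motivation of the issue.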
[jira] [Commented] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.
[ https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099179#comment-16099179 ] Bharat Viswanadham commented on HADOOP-14672: - [~busbey] I tried your patch. below is the output. testJar classes are from hadoop only (hadoop-mapreduce-client-jobclient) test. So, for this I think these should not be shaded. Found artifact with unexpected contents: '/Users/bviswanadham/workspace/hadoop/hadoop-client-modules/hadoop-client-minicluster/target/hadoop-client-minicluster-3.0.0-beta1-SNAPSHOT.jar' Please check the following and either correct the build or update the allowed list with reasoning.
capacity-scheduler.xml
krb5.conf
log4j.properties
about.html
testjar/
testjar/ClassWordCount$MapClass.class
testjar/ClassWordCount$Reduce.class
testjar/ClassWordCount.class
testjar/CustomOutputCommitter.class
testjar/ExternalIdentityReducer.class
testjar/ExternalMapperReducer.class
testjar/ExternalWritable.class
testjar/JobKillCommitter$CommitterWithFailCleanup.class
testjar/JobKillCommitter$CommitterWithFailSetup.class
testjar/JobKillCommitter$CommitterWithNoError.class
testjar/JobKillCommitter$MapperFail.class
testjar/JobKillCommitter$MapperPass.class
testjar/JobKillCommitter$MapperPassSleep.class
testjar/JobKillCommitter$ReducerFail.class
testjar/JobKillCommitter$ReducerPass.class
testjar/JobKillCommitter.class
testjar/UserNamePermission$UserNameMapper.class
testjar/UserNamePermission$UserNameReducer.class
testjar/UserNamePermission.class
testshell/
testshell/ExternalMapReduce$MapClass.class
testshell/ExternalMapReduce$Reduce.class
testshell/ExternalMapReduce.class
.options
jdtCompilerAdapter.jar
plugin.properties
plugin.xml
container-log4j.properties
java.policy
catalog.cat
javaee_5.xsd
javaee_6.xsd
javaee_web_services_client_1_2.xsd
javaee_web_services_client_1_3.xsd
jsp_2_1.xsd
jsp_2_2.xsd
web-app_2_5.xsd
web-app_3_0.xsd
web-common_3_0.xsd
xml.xsd
[jira] [Comment Edited] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.
[ https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099179#comment-16099179 ] Bharat Viswanadham edited comment on HADOOP-14672 at 7/24/17 10:07 PM: --- [~busbey] I tried your patch. below is the output. testJar,testShell classes are from hadoop only (hadoop-mapreduce-client-jobclient) test. So, for this I think these should not be shaded. Found artifact with unexpected contents: '/Users/bviswanadham/workspace/hadoop/hadoop-client-modules/hadoop-client-minicluster/target/hadoop-client-minicluster-3.0.0-beta1-SNAPSHOT.jar' Please check the following and either correct the build or update the allowed list with reasoning. capacity-scheduler.xml krb5.conf log4j.properties about.html testjar/ testjar/ClassWordCount$MapClass.class testjar/ClassWordCount$Reduce.class testjar/ClassWordCount.class testjar/CustomOutputCommitter.class testjar/ExternalIdentityReducer.class testjar/ExternalMapperReducer.class testjar/ExternalWritable.class testjar/JobKillCommitter$CommitterWithFailCleanup.class testjar/JobKillCommitter$CommitterWithFailSetup.class testjar/JobKillCommitter$CommitterWithNoError.class testjar/JobKillCommitter$MapperFail.class testjar/JobKillCommitter$MapperPass.class testjar/JobKillCommitter$MapperPassSleep.class testjar/JobKillCommitter$ReducerFail.class testjar/JobKillCommitter$ReducerPass.class testjar/JobKillCommitter.class testjar/UserNamePermission$UserNameMapper.class testjar/UserNamePermission$UserNameReducer.class testjar/UserNamePermission.class testshell/ testshell/ExternalMapReduce$MapClass.class testshell/ExternalMapReduce$Reduce.class testshell/ExternalMapReduce.class .options jdtCompilerAdapter.jar plugin.properties plugin.xml container-log4j.properties java.policy catalog.cat javaee_5.xsd javaee_6.xsd javaee_web_services_client_1_2.xsd javaee_web_services_client_1_3.xsd jsp_2_1.xsd jsp_2_2.xsd web-app_2_5.xsd web-app_3_0.xsd web-common_3_0.xsd xml.xsd was (Author: 
bharatviswa): [~busbey] I tried your patch. below is the output. testJar classes are from hadoop only (hadoop-mapreduce-client-jobclient) test. So, for this I think these should not be shaded. Found artifact with unexpected contents: '/Users/bviswanadham/workspace/hadoop/hadoop-client-modules/hadoop-client-minicluster/target/hadoop-client-minicluster-3.0.0-beta1-SNAPSHOT.jar' Please check the following and either correct the build or update the allowed list with reasoning. capacity-scheduler.xml krb5.conf log4j.properties about.html testjar/ testjar/ClassWordCount$MapClass.class testjar/ClassWordCount$Reduce.class testjar/ClassWordCount.class testjar/CustomOutputCommitter.class testjar/ExternalIdentityReducer.class testjar/ExternalMapperReducer.class testjar/ExternalWritable.class testjar/JobKillCommitter$CommitterWithFailCleanup.class testjar/JobKillCommitter$CommitterWithFailSetup.class testjar/JobKillCommitter$CommitterWithNoError.class testjar/JobKillCommitter$MapperFail.class testjar/JobKillCommitter$MapperPass.class testjar/JobKillCommitter$MapperPassSleep.class testjar/JobKillCommitter$ReducerFail.class testjar/JobKillCommitter$ReducerPass.class testjar/JobKillCommitter.class testjar/UserNamePermission$UserNameMapper.class testjar/UserNamePermission$UserNameReducer.class testjar/UserNamePermission.class testshell/ testshell/ExternalMapReduce$MapClass.class testshell/ExternalMapReduce$Reduce.class testshell/ExternalMapReduce.class .options jdtCompilerAdapter.jar plugin.properties plugin.xml container-log4j.properties java.policy catalog.cat javaee_5.xsd javaee_6.xsd javaee_web_services_client_1_2.xsd javaee_web_services_client_1_3.xsd jsp_2_1.xsd jsp_2_2.xsd web-app_2_5.xsd web-app_3_0.xsd web-common_3_0.xsd xml.xsd > Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, > dom, etc. 
> -- > > Key: HADOOP-14672 > URL: https://issues.apache.org/jira/browse/HADOOP-14672 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Junping Du >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HADOOP-14672.02.patch, HADOOP-14672.03.patch, > HADOOP-14672.04.patch, HADOOP-14672.patch > > > The shaded hadoop-client-minicluster shouldn't include any unshaded > dependencies, but we can see: javax, dom, sax, etc. are all unshaded. > CC [~busbey]
[jira] [Updated] (HADOOP-14667) Flexible Visual Studio support
[ https://issues.apache.org/jira/browse/HADOOP-14667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-14667: -- Release Note: This change updates the Microsoft Windows build directions to be more flexible with regards to Visual Studio compiler versions: * CMake v3.1 or higher is now required. * Any version of Visual Studio 2010 Pro or higher may be used. * Example command files to set command paths and upgrade Visual Studio solution files are now located in dev-support/ was: This change updates the Microsoft Windows build directions to be more flexible: * CMake v3.1 or higher is now required. * Any version of Visual Studio 2010 Pro or higher may be used. * Example command files to set command paths and upgrade Visual Studio solution files are now located in dev-support/ > Flexible Visual Studio support > -- > > Key: HADOOP-14667 > URL: https://issues.apache.org/jira/browse/HADOOP-14667 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.0.0-beta1 > Environment: Windows >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer > Attachments: HADOOP-14667.00.patch > > > Is it time to upgrade the Windows native project files to use something more > modern than Visual Studio 2010?
[jira] [Comment Edited] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.
[ https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099161#comment-16099161 ] Bharat Viswanadham edited comment on HADOOP-14672 at 7/24/17 9:55 PM: -- [~djp][~busbey] org/w3c/dom/HTMLDOMImplementation is from xerces:xercesImpl jar. Updated the code to shade the class(HTMLDOMImplementation) from that jar. Could you please review the changes. was (Author: bharatviswa): [~djp][~busbey] org/w3c/dom/HTMLDOMImplementation is from xerces:xercesImpl jar. Updated the code to shade that class from that jar. Could you please review the changes. > Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, > dom, etc. > -- > > Key: HADOOP-14672 > URL: https://issues.apache.org/jira/browse/HADOOP-14672 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Junping Du >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HADOOP-14672.02.patch, HADOOP-14672.03.patch, > HADOOP-14672.04.patch, HADOOP-14672.patch > > > The shaded hadoop-client-minicluster shouldn't include any unshaded > dependencies, but we can see: javax, dom, sax, etc. are all unshaded. > CC [~busbey]
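[Editor's note] Shading a single class from a dependency, as described in the comment above, can be done with a maven-shade-plugin relocation restricted by an include filter. The fragment below is a hedged sketch, not the actual patch: the shaded-package prefix and the exact filter syntax used in hadoop-client-minicluster may differ.

```xml
<!-- Hypothetical maven-shade-plugin fragment: relocate only the
     HTMLDOMImplementation class contributed by xerces:xercesImpl,
     leaving the JDK-provided org.w3c.dom interfaces untouched. -->
<relocation>
  <pattern>org.w3c.dom</pattern>
  <shadedPattern>org.apache.hadoop.shaded.org.w3c.dom</shadedPattern>
  <includes>
    <include>org.w3c.dom.HTMLDOMImplementation</include>
  </includes>
</relocation>
```

Without the `<includes>` filter, the relocation would rewrite every org.w3c.dom class in the shaded jar, which is exactly what this issue is trying to avoid for JDK-supplied packages.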
[jira] [Comment Edited] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.
[ https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099161#comment-16099161 ] Bharat Viswanadham edited comment on HADOOP-14672 at 7/24/17 9:54 PM: -- [~djp][~busbey] org/w3c/dom/HTMLDOMImplementation is from xerces:xercesImpl jar. Updated the code to shade that class from that jar. Could you please review the changes. was (Author: bharatviswa): [~djp][~busbey] org/w3c/dom/HTMLDOMImplementation is from xerces:xercesImpl jar. Updated the code to shade that jar. Could you please review the changes. > Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, > dom, etc. > -- > > Key: HADOOP-14672 > URL: https://issues.apache.org/jira/browse/HADOOP-14672 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Junping Du >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HADOOP-14672.02.patch, HADOOP-14672.03.patch, > HADOOP-14672.04.patch, HADOOP-14672.patch > > > The shaded hadoop-client-minicluster shouldn't include any unshaded > dependencies, but we can see: javax, dom, sax, etc. are all unshaded. > CC [~busbey]
[jira] [Commented] (HADOOP-14667) Flexible Visual Studio support
[ https://issues.apache.org/jira/browse/HADOOP-14667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099170#comment-16099170 ] Allen Wittenauer commented on HADOOP-14667: --- Linking INFRA-14627. My plan is to use this patch to get the ASF Windows build back up and running. > Flexible Visual Studio support > -- > > Key: HADOOP-14667 > URL: https://issues.apache.org/jira/browse/HADOOP-14667 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.0.0-beta1 > Environment: Windows >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer > Attachments: HADOOP-14667.00.patch > > > Is it time to upgrade the Windows native project files to use something more > modern than Visual Studio 2010?
[jira] [Commented] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.
[ https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099163#comment-16099163 ] Sean Busbey commented on HADOOP-14672: -- [~bharatviswa] would you mind taking a look at the proposed test addition to catch these bits that I have over on HADOOP-14089? I'll trade you for a review here. ;) > Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, > dom, etc. > -- > > Key: HADOOP-14672 > URL: https://issues.apache.org/jira/browse/HADOOP-14672 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Junping Du >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HADOOP-14672.02.patch, HADOOP-14672.03.patch, > HADOOP-14672.04.patch, HADOOP-14672.patch > > > The shaded hadoop-client-minicluster shouldn't include any unshaded > dependencies, but we can see: javax, dom, sax, etc. are all unshaded. > CC [~busbey]
[jira] [Commented] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.
[ https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099161#comment-16099161 ] Bharat Viswanadham commented on HADOOP-14672: - [~djp][~busbey] org/w3c/dom/HTMLDOMImplementation is from xerces:xercesImpl jar. Updated the code to shade that jar. Could you please review the changes. > Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, > dom, etc. > -- > > Key: HADOOP-14672 > URL: https://issues.apache.org/jira/browse/HADOOP-14672 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Junping Du >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HADOOP-14672.02.patch, HADOOP-14672.03.patch, > HADOOP-14672.04.patch, HADOOP-14672.patch > > > The shaded hadoop-client-minicluster shouldn't include any unshaded > dependencies, but we can see: javax, dom, sax, etc. are all unshaded. > CC [~busbey]
[jira] [Updated] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.
[ https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-14672: Attachment: HADOOP-14672.04.patch > Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, > dom, etc. > -- > > Key: HADOOP-14672 > URL: https://issues.apache.org/jira/browse/HADOOP-14672 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Junping Du >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HADOOP-14672.02.patch, HADOOP-14672.03.patch, > HADOOP-14672.04.patch, HADOOP-14672.patch > > > The shaded hadoop-client-minicluster shouldn't include any unshaded > dependencies, but we can see: javax, dom, sax, etc. are all unshaded. > CC [~busbey]
[jira] [Commented] (HADOOP-14518) Customize User-Agent header sent in HTTP/HTTPS requests by WASB.
[ https://issues.apache.org/jira/browse/HADOOP-14518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099127#comment-16099127 ] Jitendra Nath Pandey commented on HADOOP-14518: --- +1 for the latest patch. The test failures are unrelated to the patch. > Customize User-Agent header sent in HTTP/HTTPS requests by WASB. > > > Key: HADOOP-14518 > URL: https://issues.apache.org/jira/browse/HADOOP-14518 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.0.0-beta1 >Reporter: Georgi Chalakov >Assignee: Georgi Chalakov >Priority: Minor > Attachments: HADOOP-14518-01.patch, HADOOP-14518-01-test.txt, > HADOOP-14518-02.patch, HADOOP-14518-03.patch, HADOOP-14518-04.patch, > HADOOP-14518-05.patch, HADOOP-14518-06.patch > > > WASB passes a User-Agent header to the Azure back-end. Right now, it uses the > default value set by the Azure Client SDK, so Hadoop traffic doesn't appear > any different from general Blob traffic. If we customize the User-Agent > header, then it will enable better troubleshooting and analysis by Azure > service. > The following configuration > > fs.azure.user.agent.prefix > MSFT > > set the user agent to > User-Agent: WASB/3.0.0-alpha4-SNAPSHOT (MSFT) Azure-Storage/4.2.0 > (JavaJRE 1.8.0_131; WindowsServer2012R2 6.3) > Test Results : > Tests run: 703, Failures: 0, Errors: 0, Skipped: 119
[jira] [Assigned] (HADOOP-14676) Wrong default value for "fs.du.interval"
[ https://issues.apache.org/jira/browse/HADOOP-14676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen reassigned HADOOP-14676: Assignee: Erik Krogen > Wrong default value for "fs.du.interval" > > > Key: HADOOP-14676 > URL: https://issues.apache.org/jira/browse/HADOOP-14676 > Project: Hadoop Common > Issue Type: Bug > Components: common, conf, fs >Affects Versions: 2.6.1 >Reporter: Konstantin Shvachko >Assignee: Erik Krogen > > According to {{core-default.xml}} the default value of {{fs.du.interval = 60 > sec}}. But the implementation of {{DF}} substitutes 3 sec instead. The > problem is that {{DF}} uses outdated constant {{DF.DF_INTERVAL_DEFAULT}} > instead of the correct one > {{CommonConfigurationKeysPublic.FS_DU_INTERVAL_DEFAULT}}.
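[Editor's note] The HADOOP-14676 bug pattern above — a class keeping its own stale copy of a default instead of reusing the canonical constant — can be sketched in a few lines. This is a simplified, hypothetical illustration (class and method names are invented; the interval values are taken from the issue description, 60 s documented vs. 3 s actually used), not the Hadoop source:

```java
import java.util.HashMap;
import java.util.Map;

public class DfDefaultSketch {
    // Canonical default, as documented in core-default.xml (60 s per the report).
    static final long FS_DU_INTERVAL_DEFAULT = 60_000L;
    // Outdated local constant still referenced by DF (3 s per the report).
    static final long DF_INTERVAL_DEFAULT = 3_000L;

    // Bug: when "fs.du.interval" is unset, this falls back to the stale constant.
    static long buggyInterval(Map<String, Long> conf) {
        return conf.getOrDefault("fs.du.interval", DF_INTERVAL_DEFAULT);
    }

    // Fix: fall back to the canonical default instead.
    static long fixedInterval(Map<String, Long> conf) {
        return conf.getOrDefault("fs.du.interval", FS_DU_INTERVAL_DEFAULT);
    }

    public static void main(String[] args) {
        Map<String, Long> conf = new HashMap<>(); // key left unset
        System.out.println(buggyInterval(conf));  // prints 3000
        System.out.println(fixedInterval(conf));  // prints 60000
    }
}
```

The silent divergence only shows up when the key is unset, which is why a wrong fallback constant can go unnoticed for a long time.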
[jira] [Commented] (HADOOP-14681) Remove MockitoMaker class
[ https://issues.apache.org/jira/browse/HADOOP-14681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099019#comment-16099019 ] Hadoop QA commented on HADOOP-14681: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 19s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 37s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 2m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 3s{color} | {color:green} root: The patch generated 0 new + 164 unchanged - 11 fixed = 164 total (was 175) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 25s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 40s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 36s{color} | {color:green} hadoop-mapreduce-client-shuffle in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 43s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 28s{color} | {color:red} hadoop-mapreduce-client-jobclient in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 50s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}267m 11s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ha.TestZKFailoverController | | | hadoop.security.TestKDiag | | | hadoop.yarn.server.resourcemanager.TestRMRestart | | | hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken | | Timed out junit tests | org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14681 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12878620/HADOOP-14681.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 442d5a0fa6b8 3.13.0-117-generic
[jira] [Commented] (HADOOP-14599) RPC queue time metrics omit timed out clients
[ https://issues.apache.org/jira/browse/HADOOP-14599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098898#comment-16098898 ] Hadoop QA commented on HADOOP-14599: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 37s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 57s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 39s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 11s{color} | {color:orange} root: The patch generated 5 new + 711 unchanged - 1 fixed = 716 total (was 712) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 52s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}164m 58s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ha.TestZKFailoverController | | | hadoop.security.TestKDiag | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14599 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12878636/HADOOP-14599-004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux f354e5b855f2 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 770cc46 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/12846/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html | | checkstyle |
[jira] [Updated] (HADOOP-14599) RPC queue time metrics omit timed out clients
[ https://issues.apache.org/jira/browse/HADOOP-14599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashwin Ramesh updated HADOOP-14599: --- Attachment: HADOOP-14599-004.patch > RPC queue time metrics omit timed out clients > - > > Key: HADOOP-14599 > URL: https://issues.apache.org/jira/browse/HADOOP-14599 > Project: Hadoop Common > Issue Type: Bug > Components: metrics, rpc-server >Affects Versions: 2.7.0 >Reporter: Ashwin Ramesh >Assignee: Ashwin Ramesh > Attachments: HADOOP-14599.001.patch, HADOOP-14599-002.patch, > HADOOP-14599-003.patch, HADOOP-14599-004.patch > > > RPC average queue time metrics will now update even if the client who made > the call timed out while the call was in the call queue.
[jira] [Commented] (HADOOP-14455) ViewFileSystem#rename should support be supported within same nameservice with different mountpoints
[ https://issues.apache.org/jira/browse/HADOOP-14455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098513#comment-16098513 ] Hadoop QA commented on HADOOP-14455: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 26s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 48s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 15s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 59s{color} | {color:orange} root: The patch generated 1 new + 296 unchanged - 3 fixed = 297 total (was 299) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 7s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 41s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 42s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}155m 1s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.fs.shell.TestCopyFromLocal | | | hadoop.security.TestKDiag | | | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14455 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12878608/HADOOP-14455-007.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 1c63fe528ba6 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 770cc46 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | |
[jira] [Commented] (HADOOP-14245) Use Mockito.when instead of Mockito.stub
[ https://issues.apache.org/jira/browse/HADOOP-14245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098427#comment-16098427 ] Andras Bokor commented on HADOOP-14245: --- Filed HADOOP-14681. > Use Mockito.when instead of Mockito.stub > > > Key: HADOOP-14245 > URL: https://issues.apache.org/jira/browse/HADOOP-14245 > Project: Hadoop Common > Issue Type: Test > Components: test >Reporter: Akira Ajisaka >Assignee: Xiaobing Zhou >Priority: Minor > > Mockito.stub was removed in Mockito 2. Mockito.when should be used instead. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14681) Remove MockitoMaker class
[ https://issues.apache.org/jira/browse/HADOOP-14681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HADOOP-14681: -- Status: Patch Available (was: Open) > Remove MockitoMaker class > - > > Key: HADOOP-14681 > URL: https://issues.apache.org/jira/browse/HADOOP-14681 > Project: Hadoop Common > Issue Type: Bug > Components: test >Reporter: Andras Bokor >Assignee: Andras Bokor > Attachments: HADOOP-14681.01.patch > > > I would remove the MockitoMaker class and use the standard way to mock objects. > For developers it is harder to read and misleading, since it uses deprecated syntax. > In addition, it is used in only a few places, so we are using Mockito in a non-unified way.
[jira] [Updated] (HADOOP-14681) Remove MockitoMaker class
[ https://issues.apache.org/jira/browse/HADOOP-14681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HADOOP-14681: -- Attachment: HADOOP-14681.01.patch Attaching 1st patch. > Remove MockitoMaker class > - > > Key: HADOOP-14681 > URL: https://issues.apache.org/jira/browse/HADOOP-14681 > Project: Hadoop Common > Issue Type: Bug > Components: test >Reporter: Andras Bokor >Assignee: Andras Bokor > Attachments: HADOOP-14681.01.patch > > > I would remove the MockitoMaker class and use the standard way to mock objects. > For developers it is harder to read and misleading, since it uses deprecated syntax. > In addition, it is used in only a few places, so we are using Mockito in a non-unified way.
[jira] [Commented] (HADOOP-13845) s3a to instrument duration of HTTP calls
[ https://issues.apache.org/jira/browse/HADOOP-13845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098416#comment-16098416 ] Yonger commented on HADOOP-13845: - Does this make sense?
{code:java}
Duration duration = new Duration();
ObjectListing listing = s3.listObjects(request);
duration.finished();
durationStats.add(method.getName() + " " + reason, duration, success);
return listing;
{code}
> s3a to instrument duration of HTTP calls > > > Key: HADOOP-13845 > URL: https://issues.apache.org/jira/browse/HADOOP-13845 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Priority: Minor > > HADOOP-13844 proposes pulling out the swift duration classes for reuse; this > patch proposes instrumenting s3a with it. > One interesting question: what to do with the values. For now, they could > just be printed, but it might be interesting to include in FS stats collected > at the end of a run. However, those are all assumed to be simple counters > where merging is a matter of addition. These are more metrics
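One gap in the snippet above: it only records the duration on the success path, so a failed {{listObjects}} call never reaches {{durationStats}}. A try/finally sketch of the pattern under discussion (the {{Duration}} holder and stats map below are illustrative stand-ins, not the actual swift/s3a classes):

```java
// Sketch of duration instrumentation that records both success and failure.
// Class, method, and field names here are hypothetical; only the pattern
// (time in a finally block, tag with success/failure) mirrors the discussion.
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;

class DurationSketch {
  /** Elapsed-time holder, analogous in spirit to the swift Duration class. */
  static final class Duration {
    private final long started = System.nanoTime();
    private long finished;
    void finished() { finished = System.nanoTime(); }
    long millis() { return (finished - started) / 1_000_000; }
  }

  // operation name -> {invocation count, total millis, failure count}
  private final Map<String, long[]> stats = new ConcurrentHashMap<>();

  long[] statsFor(String name) { return stats.getOrDefault(name, new long[3]); }

  /** Time an operation, recording the duration whether it succeeds or throws. */
  <T> T timed(String name, Callable<T> op) throws Exception {
    Duration d = new Duration();
    boolean success = false;
    try {
      T result = op.call();
      success = true;
      return result;
    } finally {
      d.finished();  // runs on both the success and the exception path
      long[] s = stats.computeIfAbsent(name, k -> new long[3]);
      synchronized (s) {
        s[0]++;
        s[1] += d.millis();
        if (!success) s[2]++;
      }
    }
  }
}
```

Recording in the {{finally}} block keeps failed HTTP calls in the stats, which matters if these durations are to become the FS metrics the issue description contemplates.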
[jira] [Created] (HADOOP-14681) Remove MockitoMaker class
Andras Bokor created HADOOP-14681: - Summary: Remove MockitoMaker class Key: HADOOP-14681 URL: https://issues.apache.org/jira/browse/HADOOP-14681 Project: Hadoop Common Issue Type: Bug Components: test Reporter: Andras Bokor Assignee: Andras Bokor I would remove the MockitoMaker class and use the standard way to mock objects. For developers it is harder to read and misleading, since it uses deprecated syntax. In addition, it is used in only a few places, so we are using Mockito in a non-unified way.
[jira] [Updated] (HADOOP-14455) ViewFileSystem#rename should be supported within same nameservice with different mountpoints
[ https://issues.apache.org/jira/browse/HADOOP-14455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-14455: -- Attachment: HADOOP-14455-007.patch Uploading the patch to address the minor nits. > ViewFileSystem#rename should be supported within same nameservice > with different mountpoints > > > Key: HADOOP-14455 > URL: https://issues.apache.org/jira/browse/HADOOP-14455 > Project: Hadoop Common > Issue Type: Improvement > Components: viewfs >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > Attachments: HADOOP-14455-002.patch, HADOOP-14455-003.patch, > HADOOP-14455-004.patch, HADOOP-14455-005.patch, HADOOP-14455-006.patch, > HADOOP-14455-007.patch, HADOOP-14455.patch > > > *Scenario:* > || Mount Point || NameService|| Value|| > |/tmp|hacluster|/tmp| > |/user|hacluster|/user| > Move file from {{/tmp}} to {{/user}} > It will fail by throwing the following error > {noformat} > Caused by: java.io.IOException: Renames across Mount points not supported > at > org.apache.hadoop.fs.viewfs.ViewFileSystem.rename(ViewFileSystem.java:500) > at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2692) > ... 22 more > {noformat}
[jira] [Commented] (HADOOP-14455) ViewFileSystem#rename should be supported within same nameservice with different mountpoints
[ https://issues.apache.org/jira/browse/HADOOP-14455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098042#comment-16098042 ] Vinayakumar B commented on HADOOP-14455: Patch looks almost good. Just 2 nits; you can commit once they are addressed.
1. Even though reaching this line is impossible now, it should be {{IllegalArgumentException}}, not {{IllegalStateException}}.
{code}
+default:
+  throw new IllegalStateException("Unexpected rename strategy");
 }
{code}
2. In the code below, you can remove the comment line. It is no longer relevant here.
{code}
-resSrc.targetFileSystem.renameInternal(resSrc.remainingPath,
-    resDst.remainingPath, overwrite);
+//Alternate 1: renames within same file system
+URI srcUri = resSrc.targetFileSystem.getUri();
+URI dstUri = resDst.targetFileSystem.getUri();
+ViewFileSystem.verifyRenameStrategy(srcUri, dstUri,
+    resSrc.targetFileSystem == resDst.targetFileSystem, renameStrategy);
{code}
Once addressed, you can commit, unless others have a different opinion.
> ViewFileSystem#rename should be supported within same nameservice > with different mountpoints > > > Key: HADOOP-14455 > URL: https://issues.apache.org/jira/browse/HADOOP-14455 > Project: Hadoop Common > Issue Type: Improvement > Components: viewfs >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > Attachments: HADOOP-14455-002.patch, HADOOP-14455-003.patch, > HADOOP-14455-004.patch, HADOOP-14455-005.patch, HADOOP-14455-006.patch, > HADOOP-14455.patch > > > *Scenario:* > || Mount Point || NameService|| Value|| > |/tmp|hacluster|/tmp| > |/user|hacluster|/user| > Move file from {{/tmp}} to {{/user}} > It will fail by throwing the following error > {noformat} > Caused by: java.io.IOException: Renames across Mount points not supported > at > org.apache.hadoop.fs.viewfs.ViewFileSystem.rename(ViewFileSystem.java:500) > at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2692) > ... 22 more > {noformat}
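The distinction behind nit 1 can be sketched with a plain enum switch. The rationale: a strategy value arrives from the caller (ultimately from configuration), so an unexpected value is bad *input*, which Java convention maps to {{IllegalArgumentException}}; {{IllegalStateException}} is reserved for an object whose own state is wrong. The enum constants below mirror ViewFileSystem's rename strategies, but the class and method are illustrative, not the actual patch code:

```java
// Illustrative sketch: rejecting an unexpected caller-supplied enum value with
// IllegalArgumentException (bad input), per the review comment. The class and
// method are hypothetical; the enum constants mirror ViewFileSystem's.
class RenameStrategySketch {
  enum RenameStrategy {
    SAME_MOUNTPOINT,
    SAME_TARGET_URI_ACROSS_MOUNTPOINT,
    SAME_FILESYSTEM_ACROSS_MOUNTPOINT
  }

  static void verifyRenameStrategy(boolean sameMountPoint, RenameStrategy strategy) {
    switch (strategy) {
    case SAME_MOUNTPOINT:
      if (!sameMountPoint) {
        // strictest strategy: source and destination must share a mount point
        throw new IllegalArgumentException("Renames across Mount points not supported");
      }
      break;
    case SAME_TARGET_URI_ACROSS_MOUNTPOINT:
    case SAME_FILESYSTEM_ACROSS_MOUNTPOINT:
      break;  // these strategies permit cross-mountpoint renames
    default:
      // unreachable while the enum has only the cases above, but if it is ever
      // reached, the fault lies with the supplied argument, not object state
      throw new IllegalArgumentException("Unexpected rename strategy: " + strategy);
    }
  }
}
```

Keeping the {{default}} arm means a future enum constant fails fast instead of silently passing verification.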