[jira] [Comment Edited] (HADOOP-18090) Exclude com/jcraft/jsch classes from being shaded/relocated

2022-01-25 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17481853#comment-17481853
 ] 

Sean Busbey edited comment on HADOOP-18090 at 1/25/22, 2:36 PM:


I'm not sure what kind of validation you're looking for. Generally, just about 
anyone can submit things as a bug against hadoop (as you have done here). I 
believe you have filed it against the correct component.

If you mean "can someone fix this", that's all done essentially by folks 
volunteering. If no one picks up this issue you might try discussing it on the 
dev list. It will get much more attention if you attempt to fix things.

To me, this looks like a mismatch in expectations around SFTPFileSystem and our 
classpath isolated client libraries. The two questions that I would work out are

#  _should_ SFTPFileSystem be included in the client libraries or is it 
misplaced? This feels akin to the issue we had with s3a in HADOOP-16080 and 
ideally solved akin to HADOOP-15387
# Presuming this should stay where it is, fixing it means changing the set of 
included relocated classes. Why isn't this class already present if it's 
reachable from a class we include? How extensive a change is correcting that?


was (Author: busbey):
I'm not sure what kind of validation you're looking for. Generally, just about 
anyone can submit things as a bug against hadoop (as you have done here). I 
believe you have filed it against the correct component.

If you mean "can someone fix this", that's all done essentially by folks 
volunteering. If no one picks up this issue you might try discussing it on the 
dev list. It will get much more attention if you attempt to fix things.

To me, this looks like a mismatch in expectations around SFTPFileSystem and our 
classpath isolated client libraries. The two questions that I would work out are

1. _should_ SFTPFileSystem be included in the client libraries or is it 
misplaced? This feels akin to the issue we had with s3a in HADOOP-16080 and 
ideally solved akin to HADOOP-15387
1. Presuming this should stay where it is, fixing it means changing the set of 
included relocated classes. Why isn't this class already present if it's 
reachable from a class we include? How extensive a change is correcting that?

> Exclude com/jcraft/jsch classes from being shaded/relocated
> ---
>
> Key: HADOOP-18090
> URL: https://issues.apache.org/jira/browse/HADOOP-18090
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.1
>Reporter: mkv
>Priority: Major
>
> Spark 3.2.0 transitively introduces hadoop-client-api and 
> hadoop-client-runtime dependencies.
> When we create a SFTPFileSystem instance 
> (org.apache.hadoop.fs.sftp.SFTPFileSystem) it tries to load the relocated 
> classes from _com.jcraft.jsch_ package.
> The filesystem instance creation fails with error:
> {code:java}
> java.lang.ClassNotFoundException: 
> org.apache.hadoop.shaded.com.jcraft.jsch.SftpException
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:357) {code}
> Excluding client from transitive load of spark and directly using 
> hadoop-common/hadoop-client is the way it's working for us.
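The relocated name in the stack trace comes from the shade plugin's package prefixing. A minimal sketch of that naming, illustrative only (`RelocationSketch` and `relocate` are made-up names, not Hadoop code): the plugin rewrites bytecode references to jsch under the relocation prefix, but since the jsch classes themselves were never bundled, the rewritten name cannot be resolved at runtime.

```java
public class RelocationSketch {
    // Prefix used by hadoop-client-runtime's relocation rules
    static final String RELOCATION_PREFIX = "org.apache.hadoop.shaded.";

    // Mimics what the shade plugin does to class references at build time
    static String relocate(String className) {
        return RELOCATION_PREFIX + className;
    }

    public static void main(String[] args) {
        String relocated = relocate("com.jcraft.jsch.SftpException");
        // This is exactly the name from the ClassNotFoundException above;
        // no class by that name exists inside hadoop-client-runtime.
        System.out.println(relocated);
    }
}
```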



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18090) Exclude com/jcraft/jsch classes from being shaded/relocated

2022-01-25 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17481853#comment-17481853
 ] 

Sean Busbey commented on HADOOP-18090:
--

I'm not sure what kind of validation you're looking for. Generally, just about 
anyone can submit things as a bug against hadoop (as you have done here). I 
believe you have filed it against the correct component.

If you mean "can someone fix this", that's all done essentially by folks 
volunteering. If no one picks up this issue you might try discussing it on the 
dev list. It will get much more attention if you attempt to fix things.

To me, this looks like a mismatch in expectations around SFTPFileSystem and our 
classpath isolated client libraries. The two questions that I would work out are

1. _should_ SFTPFileSystem be included in the client libraries or is it 
misplaced? This feels akin to the issue we had with s3a in HADOOP-16080 and 
ideally solved akin to HADOOP-15387
1. Presuming this should stay where it is, fixing it means changing the set of 
included relocated classes. Why isn't this class already present if it's 
reachable from a class we include? How extensive a change is correcting that?

> Exclude com/jcraft/jsch classes from being shaded/relocated
> ---
>
> Key: HADOOP-18090
> URL: https://issues.apache.org/jira/browse/HADOOP-18090
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.1
>Reporter: mkv
>Priority: Major
>
> Spark 3.2.0 transitively introduces hadoop-client-api and 
> hadoop-client-runtime dependencies.
> When we create a SFTPFileSystem instance 
> (org.apache.hadoop.fs.sftp.SFTPFileSystem) it tries to load the relocated 
> classes from _com.jcraft.jsch_ package.
> The filesystem instance creation fails with error:
> {code:java}
> java.lang.ClassNotFoundException: 
> org.apache.hadoop.shaded.com.jcraft.jsch.SftpException
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:357) {code}
> Excluding client from transitive load of spark and directly using 
> hadoop-common/hadoop-client is the way it's working for us.






[jira] [Commented] (HADOOP-13922) Some modules have dependencies on hadoop-client jar removed by HADOOP-11804

2022-01-20 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479668#comment-17479668
 ] 

Sean Busbey commented on HADOOP-13922:
--

[~manojkumarvohra9] that sounds like it is probably an issue, but the specifics 
will matter. 

Very few folks will notice activity on this long-closed issue. If you're not 
sure if things are a problem, I recommend bringing it to the mailing list. If 
you are certain there's a gap then file a new jira describing the specifics of 
what you're doing and how the behavior is different from what ought to happen.

> Some modules have dependencies on hadoop-client jar removed by HADOOP-11804
> ---
>
> Key: HADOOP-13922
> URL: https://issues.apache.org/jira/browse/HADOOP-13922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Joe Pallas
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13922.1.patch
>
>
> As discussed in [HADOOP-11804 comment 
> 15758048|https://issues.apache.org/jira/browse/HADOOP-11804?focusedCommentId=15758048&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15758048]
>  and following comments, there are still dependencies on the now-removed 
> hadoop-client jar.  The current code builds only because an obsolete snapshot 
> of the jar is found on the repository server.  Changing the project version 
> to something new exposes the problem.
> While the build currently dies at hadoop-tools/hadoop-sls, I'm seeing issues 
> with some Hadoop Client modules, too.
> I'm filing a new bug because I can't reopen HADOOP-11804.






[jira] [Created] (HADOOP-17946) Update commons-lang to latest 3.x

2021-09-29 Thread Sean Busbey (Jira)
Sean Busbey created HADOOP-17946:


 Summary: Update commons-lang to latest 3.x
 Key: HADOOP-17946
 URL: https://issues.apache.org/jira/browse/HADOOP-17946
 Project: Hadoop Common
  Issue Type: Task
Reporter: Sean Busbey


our commons-lang3 dependency is currently 3.7, which is nearly 4 years old. 
latest right now is 3.12 and there are at least some fixes that would make us 
more robust on JDKs newer than openjdk8 (e.g. LANG-1384. [release notes 
indicate 3.9 is the first to support 
jdk11|https://commons.apache.org/proper/commons-lang/changes-report.html]).






[jira] [Updated] (HADOOP-17813) Allow line length more than 80 characters

2021-07-22 Thread Sean Busbey (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-17813:
-
Fix Version/s: 3.3.2
   3.2.3
   2.10.2
   3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Allow line length more than 80 characters
> -
>
> Key: HADOOP-17813
> URL: https://issues.apache.org/jira/browse/HADOOP-17813
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.2, 3.2.3, 3.3.2
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Update the checkstyle rule to allow for 100 or 120 characters.
> Discussion thread: 
> [https://lists.apache.org/thread.html/r69c363fb365d4cfdec44433e7f6ec7d7eb3505067c2fcb793765068f%40%3Ccommon-dev.hadoop.apache.org%3E]
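For reference, the checkstyle change being asked for amounts to something like the sketch below; the limit value and ignore pattern here are assumptions, and the actual values were settled in the linked discussion thread:

```xml
<!-- Sketch of a checkstyle.xml change raising the LineLength limit.
     The max value (100 vs 120) and any exemptions are assumptions here. -->
<module name="LineLength">
  <property name="max" value="100"/>
  <!-- commonly exempted: import statements -->
  <property name="ignorePattern" value="^import .*$"/>
</module>
```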






[jira] [Assigned] (HADOOP-17800) CLONE - Uber-JIRA: Hadoop should support IPv6

2021-07-15 Thread Sean Busbey (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HADOOP-17800:


Assignee: Brahma Reddy Battula  (was: Nate Edel)

> CLONE - Uber-JIRA: Hadoop should support IPv6
> -
>
> Key: HADOOP-17800
> URL: https://issues.apache.org/jira/browse/HADOOP-17800
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
>  Labels: ipv6
>
> Hadoop currently treats IPv6 as unsupported. Track related smaller issues to 
> support IPv6.
> (Current case here is mainly HBase on HDFS, so any suggestions about other 
> test cases/workload are really appreciated.)
> Please see [Here | 
> https://issues.apache.org/jira/browse/HADOOP-11890?focusedCommentId=17379845&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17379845]
>  for more details.






[jira] [Commented] (HADOOP-11890) Uber-JIRA: Hadoop should support IPv6

2021-07-15 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-11890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17381541#comment-17381541
 ] 

Sean Busbey commented on HADOOP-11890:
--

no worries. let me make sure the clone is assigned to you so you can edit 
things.

> Uber-JIRA: Hadoop should support IPv6
> -
>
> Key: HADOOP-11890
> URL: https://issues.apache.org/jira/browse/HADOOP-11890
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Reporter: Nate Edel
>Assignee: Nate Edel
>Priority: Major
>  Labels: ipv6
> Attachments: hadoop_2.7.3_ipv6_commits.txt
>
>
> Hadoop currently treats IPv6 as unsupported.  Track related smaller issues to 
> support IPv6.
> (Current case here is mainly HBase on HDFS, so any suggestions about other 
> test cases/workload are really appreciated.)






[jira] [Commented] (HADOOP-11890) Uber-JIRA: Hadoop should support IPv6

2021-07-14 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-11890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17380839#comment-17380839
 ] 

Sean Busbey commented on HADOOP-11890:
--

in the future it'll be easier for others to follow along if you

a) favor removing / renaming the old feature branch so you can use the original 
jira for the feature branch
b) expressly make a jira stating that it is for feature branch tracking rather 
than cloning the original jira.

> Uber-JIRA: Hadoop should support IPv6
> -
>
> Key: HADOOP-11890
> URL: https://issues.apache.org/jira/browse/HADOOP-11890
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Reporter: Nate Edel
>Assignee: Nate Edel
>Priority: Major
>  Labels: ipv6
> Attachments: hadoop_2.7.3_ipv6_commits.txt
>
>
> Hadoop currently treats IPv6 as unsupported.  Track related smaller issues to 
> support IPv6.
> (Current case here is mainly HBase on HDFS, so any suggestions about other 
> test cases/workload are really appreciated.)






[jira] [Commented] (HADOOP-11890) Uber-JIRA: Hadoop should support IPv6

2021-07-14 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-11890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17380816#comment-17380816
 ] 

Sean Busbey commented on HADOOP-11890:
--

Could the work happen against this jira instead of a clone of it?

> Uber-JIRA: Hadoop should support IPv6
> -
>
> Key: HADOOP-11890
> URL: https://issues.apache.org/jira/browse/HADOOP-11890
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Reporter: Nate Edel
>Assignee: Nate Edel
>Priority: Major
>  Labels: ipv6
> Attachments: hadoop_2.7.3_ipv6_commits.txt
>
>
> Hadoop currently treats IPv6 as unsupported.  Track related smaller issues to 
> support IPv6.
> (Current case here is mainly HBase on HDFS, so any suggestions about other 
> test cases/workload are really appreciated.)






[jira] [Commented] (HADOOP-15566) Support OpenTelemetry

2021-05-26 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17352209#comment-17352209
 ] 

Sean Busbey commented on HADOOP-15566:
--

htrace removal already happened in HADOOP-17424. that issue is currently 
resolved with a fix included in the trunk branch, which means it'll be in a 
3.4.0 release.

> Support OpenTelemetry
> -
>
> Key: HADOOP-15566
> URL: https://issues.apache.org/jira/browse/HADOOP-15566
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics, tracing
>Affects Versions: 3.1.0
>Reporter: Todd Lipcon
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available, security
> Attachments: HADOOP-15566.000.WIP.patch, OpenTelemetry Support Scope 
> Doc v2.pdf, OpenTracing Support Scope Doc.pdf, Screen Shot 2018-06-29 at 
> 11.59.16 AM.png, ss-trace-s3a.png
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The HTrace incubator project has voted to retire itself and won't be making 
> further releases. The Hadoop project currently has various hooks with HTrace. 
> It seems in some cases (eg HDFS-13702) these hooks have had measurable 
> performance overhead. Given these two factors, I think we should consider 
> removing the HTrace integration. If there is someone willing to do the work, 
> replacing it with OpenTracing might be a better choice since there is an 
> active community.






[jira] [Resolved] (HADOOP-17115) Replace Guava Sets usage by Hadoop's own Sets in hadoop-common and hadoop-tools

2021-05-20 Thread Sean Busbey (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved HADOOP-17115.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

merged to trunk, which puts it into the release train for Hadoop 3.4.

It's hard to tell which branches are active release trains, so I'll leave it 
here for now.

If someone feels strongly about this being on earlier branches let me know. 
Otherwise I'll reopen and back port once I figure out where we're still cutting 
releases from.

> Replace Guava Sets usage by Hadoop's own Sets in hadoop-common and 
> hadoop-tools
> ---
>
> Key: HADOOP-17115
> URL: https://issues.apache.org/jira/browse/HADOOP-17115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 8h 10m
>  Remaining Estimate: 0h
>
> Unjustified usage of Guava API to initialize a {{HashSet}}. This should be 
> replaced by Java APIs.
> {code:java}
> Targets
> Occurrences of 'Sets.newHashSet' in project
> Found Occurrences  (223 usages found)
> org.apache.hadoop.crypto.key  (2 usages found)
> TestValueQueue.java  (2 usages found)
> testWarmUp()  (2 usages found)
> 106 Assert.assertEquals(Sets.newHashSet("k1", "k2", "k3"),
> 107 Sets.newHashSet(fillInfos[0].key,
> org.apache.hadoop.crypto.key.kms  (6 usages found)
> TestLoadBalancingKMSClientProvider.java  (6 usages found)
> testCreation()  (6 usages found)
> 86 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/"),
> 87 Sets.newHashSet(providers[0].getKMSUrl()));
> 95 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
> 98 Sets.newHashSet(providers[0].getKMSUrl(),
> 108 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
> 111 Sets.newHashSet(providers[0].getKMSUrl(),
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> KMSAudit.java  (1 usage found)
> 59 static final Set AGGREGATE_OPS_WHITELIST = 
> Sets.newHashSet(
> org.apache.hadoop.fs.s3a  (1 usage found)
> TestS3AAWSCredentialsProvider.java  (1 usage found)
> testFallbackToDefaults()  (1 usage found)
> 183 Sets.newHashSet());
> org.apache.hadoop.fs.s3a.auth  (1 usage found)
> AssumedRoleCredentialProvider.java  (1 usage found)
> AssumedRoleCredentialProvider(URI, Configuration)  (1 usage found)
> 113 Sets.newHashSet(this.getClass()));
> org.apache.hadoop.fs.s3a.commit.integration  (1 usage found)
> ITestS3ACommitterMRJob.java  (1 usage found)
> test_200_execute()  (1 usage found)
> 232 Set expectedKeys = Sets.newHashSet();
> org.apache.hadoop.fs.s3a.commit.staging  (5 usages found)
> TestStagingCommitter.java  (3 usages found)
> testSingleTaskMultiFileCommit()  (1 usage found)
> 341 Set keys = Sets.newHashSet();
> runTasks(JobContext, int, int)  (1 usage found)
> 603 Set uploads = Sets.newHashSet();
> commitTask(StagingCommitter, TaskAttemptContext, int)  (1 usage 
> found)
> 640 Set files = Sets.newHashSet();
> TestStagingPartitionedTaskCommit.java  (2 usages found)
> verifyFilesCreated(PartitionedStagingCommitter)  (1 usage found)
> 148 Set files = Sets.newHashSet();
> buildExpectedList(StagingCommitter)  (1 usage found)
> 188 Set expected = Sets.newHashSet();
> org.apache.hadoop.hdfs  (5 usages found)
> DFSUtil.java  (2 usages found)
> getNNServiceRpcAddressesForCluster(Configuration)  (1 usage found)
> 615 Set availableNameServices = Sets.newHashSet(conf
> getNNLifelineRpcAddressesForCluster(Configuration)  (1 usage 
> found)
> 660 Set availableNameServices = Sets.newHashSet(conf
> MiniDFSCluster.java  (1 usage found)
> 597 private Set fileSystems = Sets.newHashSet();
> TestDFSUtil.java  (2 usages found)
> testGetNNServiceRpcAddressesForNsIds()  (2 usages found)
> 1046 assertEquals(Sets.newHashSet("nn1"), internal);
> 1049 assertEquals(Sets.newHashSet("nn1", "nn2"), all);
> org.apache.hadoop.hdfs.net  (5 usages found)
> TestDFSNetworkTopology.java  (5 usages found)
> testChooseRandomWithStorageType()  (4 usages found)
> 277 Sets.newHashSet("host2", 

[jira] [Commented] (HADOOP-17115) Replace Guava Sets usage by Hadoop's own Sets

2021-05-16 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17345689#comment-17345689
 ] 

Sean Busbey commented on HADOOP-17115:
--

Also there is no license issue with copying code out of guava so long as we 
attribute it properly. We have existing classes where this was done wholesale, 
e.g. LimitInputStream.

> Replace Guava Sets usage by Hadoop's own Sets
> -
>
> Key: HADOOP-17115
> URL: https://issues.apache.org/jira/browse/HADOOP-17115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> Unjustified usage of Guava API to initialize a {{HashSet}}. This should be 
> replaced by Java APIs.
> {code:java}
> Targets
> Occurrences of 'Sets.newHashSet' in project
> Found Occurrences  (223 usages found)
> org.apache.hadoop.crypto.key  (2 usages found)
> TestValueQueue.java  (2 usages found)
> testWarmUp()  (2 usages found)
> 106 Assert.assertEquals(Sets.newHashSet("k1", "k2", "k3"),
> 107 Sets.newHashSet(fillInfos[0].key,
> org.apache.hadoop.crypto.key.kms  (6 usages found)
> TestLoadBalancingKMSClientProvider.java  (6 usages found)
> testCreation()  (6 usages found)
> 86 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/"),
> 87 Sets.newHashSet(providers[0].getKMSUrl()));
> 95 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
> 98 Sets.newHashSet(providers[0].getKMSUrl(),
> 108 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
> 111 Sets.newHashSet(providers[0].getKMSUrl(),
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> KMSAudit.java  (1 usage found)
> 59 static final Set AGGREGATE_OPS_WHITELIST = 
> Sets.newHashSet(
> org.apache.hadoop.fs.s3a  (1 usage found)
> TestS3AAWSCredentialsProvider.java  (1 usage found)
> testFallbackToDefaults()  (1 usage found)
> 183 Sets.newHashSet());
> org.apache.hadoop.fs.s3a.auth  (1 usage found)
> AssumedRoleCredentialProvider.java  (1 usage found)
> AssumedRoleCredentialProvider(URI, Configuration)  (1 usage found)
> 113 Sets.newHashSet(this.getClass()));
> org.apache.hadoop.fs.s3a.commit.integration  (1 usage found)
> ITestS3ACommitterMRJob.java  (1 usage found)
> test_200_execute()  (1 usage found)
> 232 Set expectedKeys = Sets.newHashSet();
> org.apache.hadoop.fs.s3a.commit.staging  (5 usages found)
> TestStagingCommitter.java  (3 usages found)
> testSingleTaskMultiFileCommit()  (1 usage found)
> 341 Set keys = Sets.newHashSet();
> runTasks(JobContext, int, int)  (1 usage found)
> 603 Set uploads = Sets.newHashSet();
> commitTask(StagingCommitter, TaskAttemptContext, int)  (1 usage 
> found)
> 640 Set files = Sets.newHashSet();
> TestStagingPartitionedTaskCommit.java  (2 usages found)
> verifyFilesCreated(PartitionedStagingCommitter)  (1 usage found)
> 148 Set files = Sets.newHashSet();
> buildExpectedList(StagingCommitter)  (1 usage found)
> 188 Set expected = Sets.newHashSet();
> org.apache.hadoop.hdfs  (5 usages found)
> DFSUtil.java  (2 usages found)
> getNNServiceRpcAddressesForCluster(Configuration)  (1 usage found)
> 615 Set availableNameServices = Sets.newHashSet(conf
> getNNLifelineRpcAddressesForCluster(Configuration)  (1 usage 
> found)
> 660 Set availableNameServices = Sets.newHashSet(conf
> MiniDFSCluster.java  (1 usage found)
> 597 private Set fileSystems = Sets.newHashSet();
> TestDFSUtil.java  (2 usages found)
> testGetNNServiceRpcAddressesForNsIds()  (2 usages found)
> 1046 assertEquals(Sets.newHashSet("nn1"), internal);
> 1049 assertEquals(Sets.newHashSet("nn1", "nn2"), all);
> org.apache.hadoop.hdfs.net  (5 usages found)
> TestDFSNetworkTopology.java  (5 usages found)
> testChooseRandomWithStorageType()  (4 usages found)
> 277 Sets.newHashSet("host2", "host4", "host5", "host6");
> 278 Set archiveUnderL1 = Sets.newHashSet("host1", 
> "host3");
> 279 Set ramdiskUnderL1 = Sets.newHashSet("host7");
> 280 Set ssdUnderL1 = Sets.newHashSet("host8");
> 
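The substitution the issue description calls for can be sketched as follows; this is illustrative only, not the actual Hadoop patch, and the class name is made up:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SetsReplacementSketch {
    public static void main(String[] args) {
        // Guava style:  Set<String> keys = Sets.newHashSet("k1", "k2", "k3");
        // JDK replacement with no Guava dependency:
        Set<String> keys = new HashSet<>(Arrays.asList("k1", "k2", "k3"));

        // Empty set:  Sets.newHashSet()  becomes simply:
        Set<String> files = new HashSet<>();

        System.out.println(keys.size()); // 3
        System.out.println(files.isEmpty()); // true
    }
}
```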

[jira] [Commented] (HADOOP-17098) Reduce Guava dependency in Hadoop source code

2020-06-29 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17148016#comment-17148016
 ] 

Sean Busbey commented on HADOOP-17098:
--

sounds like a good goal with a well-defined scope.

> Reduce Guava dependency in Hadoop source code
> -
>
> Key: HADOOP-17098
> URL: https://issues.apache.org/jira/browse/HADOOP-17098
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Relying on Guava implementation in Hadoop has been painful due to 
> compatibility and vulnerability issues.
>  Guava updates tend to break/deprecate APIs. This made it hard to maintain 
> backward compatibility within hadoop versions and clients/downstreams.
> Since 3.x uses java8+, the java 8 features should be preferred to Guava, 
> reducing the footprint and giving stability to the source code.
> This jira should serve as an umbrella toward an incremental effort to reduce 
> the usage of Guava in the source code and to create subtasks to replace Guava 
> classes with Java features.
> Furthermore, it will be good to add a rule in the pre-commit build to warn 
> against introducing a new Guava usage in certain modules.
> Any one willing to take part in this code refactoring has to:
>  # Focus on one module at a time in order to reduce the conflicts and the 
> size of the patch. This will significantly help the reviewers.
>  # Run all the unit tests related to the module being affected by the change. 
> It is critical to verify that any change will not break the unit tests, or 
> cause a stable test case to become flaky.
>  
> A list of sub tasks replacing Guava APIs with java8 features:
> {code:java}
> com.google.common.io.BaseEncoding#base64()      java.util.Base64
> com.google.common.io.BaseEncoding#base64Url()   java.util.Base64
> com.google.common.base.Joiner.on()              java.lang.String#join() or
>                                                 java.util.stream.Collectors#joining()
> com.google.common.base.Optional#of()            java.util.Optional#of()
> com.google.common.base.Optional#absent()        java.util.Optional#empty()
> com.google.common.base.Optional#fromNullable()  java.util.Optional#ofNullable()
> com.google.common.base.Optional                 java.util.Optional
> com.google.common.base.Predicate                java.util.function.Predicate
> com.google.common.base.Function                 java.util.function.Function
> com.google.common.base.Supplier                 java.util.function.Supplier
> {code}
>  
> I also vote for the replacement of {{Precondition}} with either a wrapper, or 
> Apache commons lang.
> I believe you guys have dealt with Guava compatibilities in the past and 
> probably have better insights. Any thoughts? [~weichiu], [~gabor.bota], 
> [~ste...@apache.org], [~ayushtkn], [~busbey], [~jeagles], [~kihwal]
>  
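The Guava-to-JDK mapping in the description can be exercised with plain JDK APIs; a self-contained sketch (class and variable names here are illustrative, not from the Hadoop patches):

```java
import java.util.Arrays;
import java.util.Base64;
import java.util.Optional;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class GuavaToJdkSketch {
    public static void main(String[] args) {
        // BaseEncoding.base64() -> java.util.Base64
        String encoded = Base64.getEncoder().encodeToString("hadoop".getBytes());

        // Joiner.on(",").join(...) -> String.join or Collectors.joining
        String joined = String.join(",", "a", "b", "c");
        String collected = Arrays.asList("a", "b", "c").stream()
                .collect(Collectors.joining(","));

        // Optional.absent() / fromNullable() -> Optional.empty() / ofNullable()
        Optional<String> absent = Optional.empty();
        Optional<String> maybe = Optional.ofNullable(null);

        // com.google.common.base.Predicate -> java.util.function.Predicate
        Predicate<String> nonEmpty = s -> !s.isEmpty();

        System.out.println(joined.equals(collected)); // true
        System.out.println(encoded);
    }
}
```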






[jira] [Commented] (HADOOP-16219) [JDK8] Set minimum version of Hadoop 2 to JDK 8

2020-06-23 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143036#comment-17143036
 ] 

Sean Busbey commented on HADOOP-16219:
--

So long as we maintain jdk7 compatibility I think it's fine.

It's still going to break a bunch of downstream folks so we need to release 
note it.

> [JDK8] Set minimum version of Hadoop 2 to JDK 8
> ---
>
> Key: HADOOP-16219
> URL: https://issues.apache.org/jira/browse/HADOOP-16219
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.10.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16219-branch-2-001.patch
>
>
> Java 7 is long EOL; having branch-2 require it is simply making the release 
> process a pain (we aren't building, testing, or releasing on java 7 JVMs any 
> more, are we?). 
> Staying on java 7 complicates backporting, JAR updates for CVEs (hello 
> Guava!)  are becoming impossible.
> Proposed: increment javac.version = 1.8






[jira] [Commented] (HADOOP-16219) [JDK8] Set minimum version of Hadoop 2 to JDK 8

2020-06-22 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17142322#comment-17142322
 ] 

Sean Busbey commented on HADOOP-16219:
--

Did a discussion happen on common-dev?

I did not see an answer to "eol from who". Red Hat is just one vendor. The 
openjdk7 updates project is still active. Oracle still sells extended support 
for jdk7 through July 2022. Azul systems sells jdk7 support through July 2023.

> [JDK8] Set minimum version of Hadoop 2 to JDK 8
> ---
>
> Key: HADOOP-16219
> URL: https://issues.apache.org/jira/browse/HADOOP-16219
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.10.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16219-branch-2-001.patch
>
>
> Java 7 is long EOL; having branch-2 require it is simply making the release 
> process a pain (we aren't building, testing, or releasing on java 7 JVMs any 
> more, are we?). 
> Staying on java 7 complicates backporting, JAR updates for CVEs (hello 
> Guava!)  are becoming impossible.
> Proposed: increment javac.version = 1.8






[jira] [Commented] (HADOOP-16822) Provide source artifacts for hadoop-client-api

2020-03-11 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057234#comment-17057234
 ] 

Sean Busbey commented on HADOOP-16822:
--

these should only end up in the nexus repo right? If that's the case I think 
adding source jars would be nice if it works.

> Provide source artifacts for hadoop-client-api
> --
>
> Key: HADOOP-16822
> URL: https://issues.apache.org/jira/browse/HADOOP-16822
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Karel Kolman
>Assignee: Karel Kolman
>Priority: Major
> Attachments: HADOOP-16822-hadoop-client-api-source-jar.patch
>
>
> h5. Improvement request
> The third-party libraries shading hadoop-client-api (& hadoop-client-runtime) 
> artifacts are super useful.
>  
> Having uber source jar for hadoop-client-api (maybe even 
> hadoop-client-runtime) would be great for downstream development & debugging 
> purposes.
> Are there any obstacles or objections against providing a fat jar with all the 
> hadoop client api as well?
> h5. Dev links
> - *maven-shaded-plugin* and its *shadeSourcesContent* attribute
> - 
> https://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#shadeSourcesContent






[jira] [Commented] (HADOOP-16061) Update Apache Yetus to 0.10.0

2019-09-19 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933947#comment-16933947
 ] 

Sean Busbey commented on HADOOP-16061:
--

also the smart-apply-patch wrapper in dev-support on my OSX box works correctly.

> Update Apache Yetus to 0.10.0
> -
>
> Key: HADOOP-16061
> URL: https://issues.apache.org/jira/browse/HADOOP-16061
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
>
> Yetus 0.10.0 is out. Let's upgrade.






[jira] [Commented] (HADOOP-16061) Update Apache Yetus to 0.10.0

2019-09-19 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933941#comment-16933941
 ] 

Sean Busbey commented on HADOOP-16061:
--

as an alternative you can also use Homebrew to install yetus on OS X.

> Update Apache Yetus to 0.10.0
> -
>
> Key: HADOOP-16061
> URL: https://issues.apache.org/jira/browse/HADOOP-16061
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
>
> Yetus 0.10.0 is out. Let's upgrade.






[jira] [Updated] (HADOOP-15998) Ensure jar validation works on Windows.

2019-08-29 Thread Sean Busbey (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-15998:
-
Summary: Ensure jar validation works on Windows.  (was: Jar validation bash 
scripts don't work on Windows due to platform differences (colons in paths, 
\r\n))

> Ensure jar validation works on Windows.
> ---
>
> Key: HADOOP-15998
> URL: https://issues.apache.org/jira/browse/HADOOP-15998
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.2.0, 3.3.0
> Environment: Windows 10
> Visual Studio 2017
>Reporter: Brian Grunkemeyer
>Assignee: Brian Grunkemeyer
>Priority: Blocker
>  Labels: build, windows
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-15998.5.patch, HADOOP-15998.v4.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Building Hadoop fails on Windows due to a few shell scripts that make invalid 
> assumptions:
> 1) Colon shouldn't be used to separate multiple paths in command line 
> parameters. Colons occur in Windows paths.
> 2) Shell scripts that rely on running external tools need to deal with 
> carriage return - line feed differences (lines ending in \r\n, not just \n)
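Both pitfalls quoted above can be guarded against in bash. A minimal, hypothetical sketch (not the actual dev-support script): strip carriage returns from external-tool output, and pass multiple paths as separate arguments rather than one colon-delimited string, since colons appear inside Windows paths like {{C:\...}}.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of Windows-friendly shell handling (illustrative
# helper names, not Hadoop's real scripts).

# Strip carriage returns so tool output ending in \r\n compares equal
# to output ending in \n.
normalize_crlf() {
  tr -d '\r'
}

# Accept paths as separate argv entries instead of splitting one
# colon-delimited string (which would break on "C:\...").
count_paths() {
  echo "$#"
}

listing=$(printf 'foo\r\nbar\r\n' | normalize_crlf)
echo "$listing"                                  # prints "foo" then "bar"
count_paths "C:\\work\\a.jar" "C:\\work\\b.jar"  # prints 2
```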






[jira] [Updated] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2019-08-29 Thread Sean Busbey (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-15998:
-
Fix Version/s: 3.1.3
   3.2.1
   3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Jar validation bash scripts don't work on Windows due to platform differences 
> (colons in paths, \r\n)
> -
>
> Key: HADOOP-15998
> URL: https://issues.apache.org/jira/browse/HADOOP-15998
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.2.0, 3.3.0
> Environment: Windows 10
> Visual Studio 2017
>Reporter: Brian Grunkemeyer
>Assignee: Brian Grunkemeyer
>Priority: Blocker
>  Labels: build, windows
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-15998.5.patch, HADOOP-15998.v4.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Building Hadoop fails on Windows due to a few shell scripts that make invalid 
> assumptions:
> 1) Colon shouldn't be used to separate multiple paths in command line 
> parameters. Colons occur in Windows paths.
> 2) Shell scripts that rely on running external tools need to deal with 
> carriage return - line feed differences (lines ending in \r\n, not just \n)






[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2019-08-28 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16917744#comment-16917744
 ] 

Sean Busbey commented on HADOOP-15998:
--

Sorry, I read [~abmodi]'s comment as a statement about the patch's 
functionality and not necessarily agreement that it should be committed.

I'll push this later today.

> Jar validation bash scripts don't work on Windows due to platform differences 
> (colons in paths, \r\n)
> -
>
> Key: HADOOP-15998
> URL: https://issues.apache.org/jira/browse/HADOOP-15998
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.2.0, 3.3.0
> Environment: Windows 10
> Visual Studio 2017
>Reporter: Brian Grunkemeyer
>Assignee: Brian Grunkemeyer
>Priority: Blocker
>  Labels: build, windows
> Attachments: HADOOP-15998.5.patch, HADOOP-15998.v4.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Building Hadoop fails on Windows due to a few shell scripts that make invalid 
> assumptions:
> 1) Colon shouldn't be used to separate multiple paths in command line 
> parameters. Colons occur in Windows paths.
> 2) Shell scripts that rely on running external tools need to deal with 
> carriage return - line feed differences (lines ending in \r\n, not just \n)






[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2019-08-27 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916650#comment-16916650
 ] 

Sean Busbey commented on HADOOP-15998:
--

I'm all for it. Do I have a +1?

> Jar validation bash scripts don't work on Windows due to platform differences 
> (colons in paths, \r\n)
> -
>
> Key: HADOOP-15998
> URL: https://issues.apache.org/jira/browse/HADOOP-15998
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.2.0, 3.3.0
> Environment: Windows 10
> Visual Studio 2017
>Reporter: Brian Grunkemeyer
>Assignee: Brian Grunkemeyer
>Priority: Blocker
>  Labels: build, windows
> Attachments: HADOOP-15998.5.patch, HADOOP-15998.v4.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Building Hadoop fails on Windows due to a few shell scripts that make invalid 
> assumptions:
> 1) Colon shouldn't be used to separate multiple paths in command line 
> parameters. Colons occur in Windows paths.
> 2) Shell scripts that rely on running external tools need to deal with 
> carriage return - line feed differences (lines ending in \r\n, not just \n)






[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2019-08-26 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915797#comment-16915797
 ] 

Sean Busbey commented on HADOOP-15998:
--

Thanks Rohith.

[~briangru] or [~giovanni.fumarola] either of y'all up for trying out the 
current patch?

> Jar validation bash scripts don't work on Windows due to platform differences 
> (colons in paths, \r\n)
> -
>
> Key: HADOOP-15998
> URL: https://issues.apache.org/jira/browse/HADOOP-15998
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.2.0, 3.3.0
> Environment: Windows 10
> Visual Studio 2017
>Reporter: Brian Grunkemeyer
>Assignee: Brian Grunkemeyer
>Priority: Blocker
>  Labels: build, windows
> Attachments: HADOOP-15998.5.patch, HADOOP-15998.v4.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Building Hadoop fails on Windows due to a few shell scripts that make invalid 
> assumptions:
> 1) Colon shouldn't be used to separate multiple paths in command line 
> parameters. Colons occur in Windows paths.
> 2) Shell scripts that rely on running external tools need to deal with 
> carriage return - line feed differences (lines ending in \r\n, not just \n)






[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2019-08-25 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915406#comment-16915406
 ] 

Sean Busbey commented on HADOOP-15998:
--

I started trying to test this on my Windows 10 machine, but it's taking some 
time to get through our Windows build instructions.

If someone already has a Windows build environment ready to go, it'd help a ton 
if they could check on this patch.

> Jar validation bash scripts don't work on Windows due to platform differences 
> (colons in paths, \r\n)
> -
>
> Key: HADOOP-15998
> URL: https://issues.apache.org/jira/browse/HADOOP-15998
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.2.0, 3.3.0
> Environment: Windows 10
> Visual Studio 2017
>Reporter: Brian Grunkemeyer
>Assignee: Brian Grunkemeyer
>Priority: Blocker
>  Labels: build, windows
> Attachments: HADOOP-15998.5.patch, HADOOP-15998.v4.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Building Hadoop fails on Windows due to a few shell scripts that make invalid 
> assumptions:
> 1) Colon shouldn't be used to separate multiple paths in command line 
> parameters. Colons occur in Windows paths.
> 2) Shell scripts that rely on running external tools need to deal with 
> carriage return - line feed differences (lines ending in \r\n, not just \n)






[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2019-08-23 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16914675#comment-16914675
 ] 

Sean Busbey commented on HADOOP-15998:
--

v5

- get all the artifacts passed instead of the first
- error if no artifacts
- fail if any command fails
- change reading of jar contents to avoid shell handling
- added a bash 3.1 check that we should have had before

tested locally on mac. verified that all jars are being tested, that things 
fail on {{jar}} command failure, that things fail when something isn't 
relocated, and that things pass correctly.
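The kind of relocation check described above can be sketched roughly like this (a simplified illustration, not the actual dev-support script; the allowed prefixes and function name are made up): list a jar's entries and fail if any class falls outside the relocated namespace.

```shell
#!/usr/bin/env bash
# Simplified sketch of a shaded-jar relocation check. The prefixes below
# are illustrative assumptions, not Hadoop's real allow-list.
set -e

check_relocation() {
  # Reads a jar entry listing from stdin (in real use: jar tf some.jar).
  # "|| true" keeps an empty grep result from tripping set -e.
  local bad
  bad=$(grep '\.class$' | grep -v -e '^org/apache/hadoop/' \
                                  -e '^META-INF/' || true)
  if [ -n "$bad" ]; then
    echo "non-relocated classes found:" >&2
    echo "$bad" >&2
    return 1
  fi
  return 0
}

# A relocated entry passes; a leaked third-party class is caught.
printf 'org/apache/hadoop/shaded/com/google/Foo.class\n' | check_relocation
printf 'com/google/common/Bar.class\n' | check_relocation 2>/dev/null \
  && echo unexpected || echo "caught leak"   # prints "caught leak"
```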

> Jar validation bash scripts don't work on Windows due to platform differences 
> (colons in paths, \r\n)
> -
>
> Key: HADOOP-15998
> URL: https://issues.apache.org/jira/browse/HADOOP-15998
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.2.0, 3.3.0
> Environment: Windows 10
> Visual Studio 2017
>Reporter: Brian Grunkemeyer
>Assignee: Brian Grunkemeyer
>Priority: Blocker
>  Labels: build, windows
> Attachments: HADOOP-15998.5.patch, HADOOP-15998.v4.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Building Hadoop fails on Windows due to a few shell scripts that make invalid 
> assumptions:
> 1) Colon shouldn't be used to separate multiple paths in command line 
> parameters. Colons occur in Windows paths.
> 2) Shell scripts that rely on running external tools need to deal with 
> carriage return - line feed differences (lines ending in \r\n, not just \n)






[jira] [Updated] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2019-08-23 Thread Sean Busbey (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-15998:
-
Attachment: HADOOP-15998.5.patch

> Jar validation bash scripts don't work on Windows due to platform differences 
> (colons in paths, \r\n)
> -
>
> Key: HADOOP-15998
> URL: https://issues.apache.org/jira/browse/HADOOP-15998
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.2.0, 3.3.0
> Environment: Windows 10
> Visual Studio 2017
>Reporter: Brian Grunkemeyer
>Assignee: Brian Grunkemeyer
>Priority: Blocker
>  Labels: build, windows
> Attachments: HADOOP-15998.5.patch, HADOOP-15998.v4.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Building Hadoop fails on Windows due to a few shell scripts that make invalid 
> assumptions:
> 1) Colon shouldn't be used to separate multiple paths in command line 
> parameters. Colons occur in Windows paths.
> 2) Shell scripts that rely on running external tools need to deal with 
> carriage return - line feed differences (lines ending in \r\n, not just \n)






[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2019-08-23 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16914669#comment-16914669
 ] 

Sean Busbey commented on HADOOP-15998:
--

v4 as is doesn't work correctly. I have an amended patch about ready.

> Jar validation bash scripts don't work on Windows due to platform differences 
> (colons in paths, \r\n)
> -
>
> Key: HADOOP-15998
> URL: https://issues.apache.org/jira/browse/HADOOP-15998
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.2.0, 3.3.0
> Environment: Windows 10
> Visual Studio 2017
>Reporter: Brian Grunkemeyer
>Assignee: Brian Grunkemeyer
>Priority: Blocker
>  Labels: build, windows
> Attachments: HADOOP-15998.v4.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Building Hadoop fails on Windows due to a few shell scripts that make invalid 
> assumptions:
> 1) Colon shouldn't be used to separate multiple paths in command line 
> parameters. Colons occur in Windows paths.
> 2) Shell scripts that rely on running external tools need to deal with 
> carriage return - line feed differences (lines ending in \r\n, not just \n)






[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2019-08-19 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16910512#comment-16910512
 ] 

Sean Busbey commented on HADOOP-15998:
--

The current QA result looks promising. I haven't reviewed since December 2018. 
Presuming v4 fixes the integration tests so they actually fail when they 
should, and fixes the issue that was failing before, it looks like just a few 
shellcheck warnings remain to clean up before this is good to go.

> Jar validation bash scripts don't work on Windows due to platform differences 
> (colons in paths, \r\n)
> -
>
> Key: HADOOP-15998
> URL: https://issues.apache.org/jira/browse/HADOOP-15998
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.2.0, 3.3.0
> Environment: Windows 10
> Visual Studio 2017
>Reporter: Brian Grunkemeyer
>Assignee: Brian Grunkemeyer
>Priority: Blocker
>  Labels: build, windows
> Fix For: 3.3.0
>
> Attachments: HADOOP-15998.v4.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Building Hadoop fails on Windows due to a few shell scripts that make invalid 
> assumptions:
> 1) Colon shouldn't be used to separate multiple paths in command line 
> parameters. Colons occur in Windows paths.
> 2) Shell scripts that rely on running external tools need to deal with 
> carriage return - line feed differences (lines ending in \r\n, not just \n)






[jira] [Comment Edited] (HADOOP-16219) [JDK8] Set minimum version of Hadoop 2 to JDK 8

2019-04-10 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16814634#comment-16814634
 ] 

Sean Busbey edited comment on HADOOP-16219 at 4/10/19 4:46 PM:
---

HBase abandoned staying on the same version as Hadoop a year or two ago. If we 
upgrade guava on a branch where we support Java 7 we'd do it via the -android 
flavor.

We also test on jenkins with Java 7 by using docker + an Azul jdk. "has to" is 
overly strong phrasing, though I get your point. :)


was (Author: busbey):
HBase abandoned staying on the same version as Hadoop a year or two ago. If we 
upgrade android on a Java 7 we'd do it via the -android flavor.

We also test on jenkins with Java 7 by using docker + an Azul jdk. "has to" is 
overly strong phrasing, though I get your point. :)

> [JDK8] Set minimum version of Hadoop 2 to JDK 8
> ---
>
> Key: HADOOP-16219
> URL: https://issues.apache.org/jira/browse/HADOOP-16219
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.10.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16219-branch-2-001.patch
>
>
> Java 7 is long EOL; having branch-2 require it is simply making the release 
> process a pain (we aren't building, testing, or releasing on java 7 JVMs any 
> more, are we?). 
> Staying on java 7 complicates backporting, JAR updates for CVEs (hello 
> Guava!)  are becoming impossible.
> Proposed: increment javac.version = 1.8






[jira] [Commented] (HADOOP-16219) [JDK8] Set minimum version of Hadoop 2 to JDK 8

2019-04-10 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16814634#comment-16814634
 ] 

Sean Busbey commented on HADOOP-16219:
--

HBase abandoned staying on the same version as Hadoop a year or two ago. If we 
upgrade android on a Java 7 we'd do it via the -android flavor.

We also test on jenkins with Java 7 by using docker + an Azul jdk. "has to" is 
overly strong phrasing, though I get your point. :)

> [JDK8] Set minimum version of Hadoop 2 to JDK 8
> ---
>
> Key: HADOOP-16219
> URL: https://issues.apache.org/jira/browse/HADOOP-16219
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.10.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16219-branch-2-001.patch
>
>
> Java 7 is long EOL; having branch-2 require it is simply making the release 
> process a pain (we aren't building, testing, or releasing on java 7 JVMs any 
> more, are we?). 
> Staying on java 7 complicates backporting, JAR updates for CVEs (hello 
> Guava!)  are becoming impossible.
> Proposed: increment javac.version = 1.8






[jira] [Commented] (HADOOP-16219) [JDK8] Set minimum version of Hadoop 2 to JDK 8

2019-04-05 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16810940#comment-16810940
 ] 

Sean Busbey commented on HADOOP-16219:
--

Also, Guava still maintains JDK7-compatible updates as of 27.1: it's the 
-android flavor. ([see the explanation of "flavors" of guava in the project 
README|https://github.com/google/guava/blob/master/README.md])

> [JDK8] Set minimum version of Hadoop 2 to JDK 8
> ---
>
> Key: HADOOP-16219
> URL: https://issues.apache.org/jira/browse/HADOOP-16219
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.10.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16219-branch-2-001.patch
>
>
> Java 7 is long EOL; having branch-2 require it is simply making the release 
> process a pain (we aren't building, testing, or releasing on java 7 JVMs any 
> more, are we?). 
> Staying on java 7 complicates backporting, JAR updates for CVEs (hello 
> Guava!)  are becoming impossible.
> Proposed: increment javac.version = 1.8






[jira] [Commented] (HADOOP-16219) [JDK8] Set minimum version of Hadoop 2 to JDK 8

2019-04-05 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16810863#comment-16810863
 ] 

Sean Busbey commented on HADOOP-16219:
--

for example, Red Hat still supports JDK7 through June 2020:

https://access.redhat.com/articles/1299013

> [JDK8] Set minimum version of Hadoop 2 to JDK 8
> ---
>
> Key: HADOOP-16219
> URL: https://issues.apache.org/jira/browse/HADOOP-16219
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.10.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16219-branch-2-001.patch
>
>
> Java 7 is long EOL; having branch-2 require it is simply making the release 
> process a pain (we aren't building, testing, or releasing on java 7 JVMs any 
> more, are we?). 
> Staying on java 7 complicates backporting, JAR updates for CVEs (hello 
> Guava!)  are becoming impossible.
> Proposed: increment javac.version = 1.8






[jira] [Commented] (HADOOP-16219) [JDK8] Set minimum version of Hadoop 2 to JDK 8

2019-04-05 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16810858#comment-16810858
 ] 

Sean Busbey commented on HADOOP-16219:
--

{quote}
bq.  just to be clear, this flies directly in the face of our compatibility 
guidelines by being an incompatible change in a minor version release, right?

well it would be, if we didn't explicitly call out JVM EOL as something that 
can force an update
{quote}

EOL from who though? Aren't there still folks offering JDK7 releases?

This is going to make things unpleasant for HBase as a downstreamer since we've 
been trying to maintain jdk7 on our stable release branch and relying on Hadoop 
2 releases is part of how we've done that.

> [JDK8] Set minimum version of Hadoop 2 to JDK 8
> ---
>
> Key: HADOOP-16219
> URL: https://issues.apache.org/jira/browse/HADOOP-16219
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.10.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16219-branch-2-001.patch
>
>
> Java 7 is long EOL; having branch-2 require it is simply making the release 
> process a pain (we aren't building, testing, or releasing on java 7 JVMs any 
> more, are we?). 
> Staying on java 7 complicates backporting, JAR updates for CVEs (hello 
> Guava!)  are becoming impossible.
> Proposed: increment javac.version = 1.8






[jira] [Commented] (HADOOP-16183) Use latest Yetus to support ozone specific build process

2019-04-02 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16807773#comment-16807773
 ] 

Sean Busbey commented on HADOOP-16183:
--

bq. As far I understand it's not forbidden to use any snapshot version but it 
shouldn't be publish for wider audience.

so long as "wider audience" means folks outside of the community defined by the 
dev@ mailing list of a given project, we're in agreement. The entire Hadoop 
project is not on that mailing list, so it should not be our regular practice 
to rely on a non-released version.

bq. In fact Hadoop used unreleased (or even forked) yetus multiple times. (You 
can check the config history: 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/jobConfigHistory/showDiffFiles?timestamp1=2018-07-10_14-47-51=2018-09-01_21-39-22)

AFAICT someone who is in both the Yetus and Hadoop communities was helpful 
enough to make sure that various fixes worked for our project. If you're 
attempting to see if the change in yetus actually does what you want and you're 
subscribed to dev@yetus, you are welcome to use individual builds to verify 
things as well. The proposed change here would move Hadoop to rely on a 
non-released version continuously. Our use of yetus is a black box to most of 
the Hadoop community, and I don't want folks who are new to it, and who need to 
ask questions of the yetus community, to show up relying on non-released 
artifacts.

bq. But I respect your opinion, I will ask for a release on the yetus-dev list.

thanks!

{quote}
BTW. Wouldn't be better to store the hadoop personality in the hadoop 
repository? According to your comment we need a release from an other project 
(yetus) to change anything in the build definition/personality. (cc Allen 
Wittenauer)
{quote}

I don't have an opinion on how that personality is maintained since I don't put 
work into maintaining it.

> Use latest Yetus to support ozone specific build process
> 
>
> Key: HADOOP-16183
> URL: https://issues.apache.org/jira/browse/HADOOP-16183
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> In YETUS-816 the hadoop personality is improved to better support ozone 
> specific changes.
> Unfortunately the hadoop personality is part of the Yetus project and not the 
> Hadoop project: we need a new yetus release or switch to an unreleased 
> version.
> In this patch I propose to use the latest commit from yetus (but use that 
> fixed commit instead updating all the time). 






[jira] [Commented] (HADOOP-16183) Use latest Yetus to support ozone specific build process

2019-03-26 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16802331#comment-16802331
 ] 

Sean Busbey commented on HADOOP-16183:
--

We should ask for a Yetus release. We're a downstream user of that project. We 
know that as an ASF project Yetus isn't supposed to allow downstreamers to 
consume non-released stuff.

> Use latest Yetus to support ozone specific build process
> 
>
> Key: HADOOP-16183
> URL: https://issues.apache.org/jira/browse/HADOOP-16183
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> In YETUS-816 the hadoop personality is improved to better support ozone 
> specific changes.
> Unfortunately the hadoop personality is part of the Yetus project and not the 
> Hadoop project: we need a new yetus release or switch to an unreleased 
> version.
> In this patch I propose to use the latest commit from yetus (but use that 
> fixed commit instead updating all the time). 






[jira] [Commented] (HADOOP-15387) Produce a shaded hadoop-cloud-storage JAR for applications to use

2019-01-30 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756273#comment-16756273
 ] 

Sean Busbey commented on HADOOP-15387:
--

let me think on this a bit. I think the hadoop-common thing is fixable without 
too much heartburn.

the goal is a downstreamer adds {{org.apache.hadoop:hadoop-cloud-storage}} as a 
dependency and things work right? (again ignoring some specifics around "we 
need these logging frameworks" etc)

is "things work" just for "I'm using FileSystem APIs to access the cloud 
storage system X"? or is it some other subset of downstream facing APIs? I know 
your original description said this should pull in the hadoop-client stuff, but 
would it be too much to instead say use of {{hadoop-cloud-storage}} always 
required {{hadoop-client-api}} and {{hadoop-client-runtime}}? Specifically as 
transitive dependencies, not like we'd make folks always add 3 entries to 
maven? (though I suspect most practical uses will require listing one of those 
directly if folks use {{dependency:analyze}})

would the individual SDKs being optional be too onerous? essentially it would 
mean everyone would add {{hadoop-cloud-storage}} and they'd add the SDK(s) for 
whichever of the implementations they were actually going to use. Or is the 
common use case here the opposite? like most downstream users will want to 
opt-out via maven exclusions rather than needing to opt-in? opt-in would mean 
we could make it so only folks who specifically want to work with a provider 
whose SDK leaks dependencies would be impacted by that leakage (and we'd keep 
fixing that the SDK provider's problem, not ours).

> Produce a shaded hadoop-cloud-storage JAR for applications to use
> -
>
> Key: HADOOP-15387
> URL: https://issues.apache.org/jira/browse/HADOOP-15387
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/adl, fs/azure, fs/oss, fs/s3, fs/swift
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Major
>
> Produce a maven-shaded hadoop-cloudstorage JAR for downstream use so that
>  * Hadoop dependency choices don't control their decisions
>  * Little/No risk of their JAR changes breaking Hadoop bits they depend on
> This JAR would pull in the shaded hadoop-client JAR, and the aws-sdk-bundle 
> JAR, neither of which would be unshaded (so yes, upgrading aws-sdks would be 
> a bit risky, but double shading a pre-shaded 30MB JAR is excessive on 
> multiple levels).
> Metrics of success: Spark, Tez, Flink etc can pick up and use, and all are 
> happy






[jira] [Commented] (HADOOP-15387) Produce a shaded hadoop-cloud-storage JAR for applications to use

2019-01-29 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755284#comment-16755284
 ] 

Sean Busbey commented on HADOOP-15387:
--

[~ste...@apache.org] can you help me understand the scope here a bit?

I think what the description says is we end up with a single jar where all 
classes are in {{org.apache.hadoop}} or {{software.amazon.awssdk}} and we rely 
on shading to relocate any others (modulo the normal caveats on logging / 
tracing libraries that came up during the hadoop-client modules).

Does it need to be all of the Amazon AWS SDK? Is there some interface jar that 
we could use while allowing BYO-SDK? Or for that matter could we just update 
the various cloud storage modules to individually relocate things that aren't 
either hadoop-client-facing or their respective service's SDK?







[jira] [Comment Edited] (HADOOP-15387) Produce a shaded hadoop-cloud-storage JAR for applications to use

2019-01-29 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755284#comment-16755284
 ] 

Sean Busbey edited comment on HADOOP-15387 at 1/29/19 6:55 PM:
---

[~ste...@apache.org] can you help me understand the scope here a bit?

I think what the description says is we end up with a single jar where all 
classes are in {{org.apache.hadoop}} or {{software.amazon.awssdk}} (or whatever 
their package space is) and we rely on shading to relocate any others (modulo 
the normal caveats on logging / tracing libraries that came up during the 
hadoop-client modules).

Does it need to be all of the Amazon AWS SDK? Is there some interface jar that 
we could use while allowing BYO-SDK? Or for that matter could we just update 
the various cloud storage modules to individually relocate things that aren't 
either hadoop-client-facing or their respective service's SDK?


was (Author: busbey):
[~ste...@apache.org] can you help me on understanding scope here a bit?

I think what the description says is we end up with a single jar where all 
classes are in {{org.apache.hadoop}} or {{software.amazon.awssdk}} and we rely 
on shading to relocate any others (modulo the normal caveats on logging / 
tracing libraries that came up during the hadoop-client modules).

Does it need to be all of the Amazon AWS SDK? Is there some interface jar that 
we could use while allowing BYO-SDK? Or for that matter could we just update 
the various cloud storage modules to individually relocate things that aren't 
either hadoop-client-facing or their respective service's SDK?







[jira] [Commented] (HADOOP-16061) Update Apache Yetus to 0.9.0

2019-01-26 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16753133#comment-16753133
 ] 

Sean Busbey commented on HADOOP-16061:
--

could the hadoop-yetus account just use common-issues@hadoop as its email 
address? does the bot need the ability to send emails?

> Update Apache Yetus to 0.9.0
> 
>
> Key: HADOOP-16061
> URL: https://issues.apache.org/jira/browse/HADOOP-16061
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>
> Yetus 0.9.0 is out. Let's upgrade.






[jira] [Commented] (HADOOP-15978) Add Netty support to the RPC server

2019-01-08 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16737799#comment-16737799
 ] 

Sean Busbey commented on HADOOP-15978:
--

Over in Apache HBase we've been relocating Netty 4 since our 2.0 release and 
IIRC there was a bunch more work needed to isolate it because of a bundled 
{{.so}}. What's different here that lets us skip it?

see for example [hbase-thirdparty relocating the 
.so|https://github.com/apache/hbase-thirdparty/blob/rel/2.1.0/hbase-shaded-netty/pom.xml#L97]
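
Netty derives its native library name from its java package (mapping {{.}} to {{_}}), so relocating the classes without also renaming the bundled {{.so}} breaks loading of the native transport; that's the rename hbase-thirdparty performs with Maven plugins in the link above. A rough Python sketch of just the rename step — the relocation prefix and entry names here are illustrative assumptions, not Hadoop's actual shading config:

```python
import zipfile

# A relocation prefix like org.apache.hadoop.shaded. implies the bundled
# native lib must gain the prefix org_apache_hadoop_shaded_, since Netty
# maps '.' to '_' when deriving the library name from its package.
PREFIX = "org_apache_hadoop_shaded_"

def relocated_entry_name(name: str) -> str:
    """Rename bundled Netty native libs to match the relocated package."""
    dirname, _, base = name.rpartition("/")
    if dirname == "META-INF/native" and base.startswith("libnetty"):
        return f"{dirname}/lib{PREFIX}{base[len('lib'):]}"
    return name

def rewrite_jar(src: str, dst: str) -> None:
    """Copy a jar, renaming native lib entries (class relocation not shown)."""
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
        for info in zin.infolist():
            zout.writestr(relocated_entry_name(info.filename), zin.read(info))
```

The class-file relocation itself is still the shade plugin's job; this only shows why a bundled {{.so}} needs extra handling beyond bytecode rewriting.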

> Add Netty support to the RPC server
> ---
>
> Key: HADOOP-15978
> URL: https://issues.apache.org/jira/browse/HADOOP-15978
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, security
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Major
> Attachments: HADOOP-15978.patch, HADOOP-15978.shade.patch
>
>
> Adding Netty will allow later using a native TLS transport layer with much 
> better performance than that offered by Java's SSLEngine.






[jira] [Commented] (HADOOP-15978) Add Netty support to the RPC server

2019-01-08 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16737779#comment-16737779
 ] 

Sean Busbey commented on HADOOP-15978:
--

to be clear, I'm -1 on including relocated classes in the package {{hrpc}} 
unless there's some class name length limitation stopping us from using 
{{org.apache.hadoop.shaded}}.







[jira] [Commented] (HADOOP-15978) Add Netty support to the RPC server

2019-01-08 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1673#comment-1673
 ] 

Sean Busbey commented on HADOOP-15978:
--

yes, we should only be exposing downstream to classes that are within our java 
package space, which means {{org.apache.hadoop}}. Preferably we would always 
make clear when we're relocating things and have it be 
{{org.apache.hadoop.shaded}}. That's why our own build process checks for 
classes in other package spaces and complains.
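
That build check amounts to listing a shaded jar's entries and flagging any class outside the project's package space. A simplified sketch of the idea — the real check is a shell script run by the hadoop-client-check-invariants module, and the single allowed prefix below is an assumption for illustration:

```python
import zipfile

# Every class in a shaded client artifact should live under this prefix
# (relocated third-party classes end up under org/apache/hadoop/shaded/).
ALLOWED_PREFIX = "org/apache/hadoop/"

def bad_entries(jar_path: str) -> list:
    """Return .class entries that escaped relocation into our package space."""
    with zipfile.ZipFile(jar_path) as jar:
        return [
            name for name in jar.namelist()
            if name.endswith(".class") and not name.startswith(ALLOWED_PREFIX)
        ]
```

An empty result means the artifact "looks correct" in the sense of the invariants check; anything returned is a class downstream users could collide with.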







[jira] [Updated] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2018-12-13 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-15998:
-
Status: In Progress  (was: Patch Available)

moving out of Patch Available status pending fix of the maven integration test 
for jar contents.

> Jar validation bash scripts don't work on Windows due to platform differences 
> (colons in paths, \r\n)
> -
>
> Key: HADOOP-15998
> URL: https://issues.apache.org/jira/browse/HADOOP-15998
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.2.0, 3.3.0
> Environment: Windows 10
> Visual Studio 2017
>Reporter: Brian Grunkemeyer
>Priority: Blocker
>  Labels: build, windows
> Fix For: 3.3.0
>
> Attachments: HADOOP-15998.v2.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Building Hadoop fails on Windows due to a few shell scripts that make invalid 
> assumptions:
> 1) Colon shouldn't be used to separate multiple paths in command line 
> parameters. Colons occur in Windows paths.
> 2) Shell scripts that rely on running external tools need to deal with 
> carriage return - line feed differences (lines ending in \r\n, not just \n)






[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2018-12-13 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720337#comment-16720337
 ] 

Sean Busbey commented on HADOOP-15998:
--

[~briangru] I added you to the contributor role on the Hadoop jira trackers 
(looks like you were only listed on YARN before), so you should be able to 
assign this jira to yourself now.







[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2018-12-12 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719660#comment-16719660
 ] 

Sean Busbey commented on HADOOP-15998:
--

on windows the classpath separator is {{;}} which means we should fail 
similarly there once this patch is applied.







[jira] [Updated] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2018-12-12 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-15998:
-
Labels: build windows  (was: build newbie windows)







[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2018-12-12 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719649#comment-16719649
 ] 

Sean Busbey commented on HADOOP-15998:
--

okay, the integration tests do show issues but we aren't properly recognizing them.

Here's the branch version in precommit above:
https://builds.apache.org/job/PreCommit-HADOOP-Build/15643/artifact/out/branch-shadedclient.txt/*view*/
{code}
[INFO] --- maven-dependency-plugin:3.0.2:build-classpath 
(put-client-artifacts-in-a-property) @ hadoop-client-check-invariants ---
[INFO] Dependencies classpath:
/testptch/hadoop/hadoop-client-modules/hadoop-client-api/target/hadoop-client-api-3.3.0-SNAPSHOT.jar:/testptch/hadoop/hadoop-client-modules/hadoop-client-runtime/target/hadoop-client-runtime-3.3.0-SNAPSHOT.jar
[INFO] 
[INFO] --- exec-maven-plugin:1.3.1:exec (check-jar-contents) @ 
hadoop-client-check-invariants ---
[INFO] Artifact looks correct: 'hadoop-client-api-3.3.0-SNAPSHOT.jar'
[INFO] Artifact looks correct: 'hadoop-client-runtime-3.3.0-SNAPSHOT.jar'
[INFO] 
{code}

Here's after the patch has been applied:
https://builds.apache.org/job/PreCommit-HADOOP-Build/15643/artifact/out/patch-shadedclient.txt/*view*/
{code}
[INFO] --- maven-dependency-plugin:3.0.2:build-classpath 
(put-client-artifacts-in-a-property) @ hadoop-client-check-invariants ---
[INFO] Dependencies classpath:
/testptch/hadoop/hadoop-client-modules/hadoop-client-api/target/hadoop-client-api-3.3.0-SNAPSHOT.jar:/testptch/hadoop/hadoop-client-modules/hadoop-client-runtime/target/hadoop-client-runtime-3.3.0-SNAPSHOT.jar
[INFO] 
[INFO] --- exec-maven-plugin:1.3.1:exec (check-jar-contents) @ 
hadoop-client-check-invariants ---
java.io.FileNotFoundException: 
/testptch/hadoop/hadoop-client-modules/hadoop-client-api/target/hadoop-client-api-3.3.0-SNAPSHOT.jar:/testptch/hadoop/hadoop-client-modules/hadoop-client-runtime/target/hadoop-client-runtime-3.3.0-SNAPSHOT.jar
 (No such file or directory)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:225)
at java.util.zip.ZipFile.<init>(ZipFile.java:155)
at java.util.zip.ZipFile.<init>(ZipFile.java:126)
at sun.tools.jar.Main.list(Main.java:1115)
at sun.tools.jar.Main.run(Main.java:293)
at sun.tools.jar.Main.main(Main.java:1288)
[INFO] Artifact looks correct: 'hadoop-client-runtime-3.3.0-SNAPSHOT.jar'
[INFO] 
{code}

Please fix this before commit. Ideally also figure out why the build didn't 
actually fail and fix that.
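
The {{FileNotFoundException}} above shows the whole colon-joined classpath being handed to {{jar}} as a single filename: the check has to split the dependency classpath into individual jars first, and per this issue it should split on the platform separator rather than a hardcoded colon. A minimal Python illustration of the splitting concern (the actual script is bash; names here are only for the example):

```python
import os

def split_classpath(classpath: str, sep: str = os.pathsep) -> list:
    """Split a dependency classpath into jar paths, dropping empty entries.

    Hardcoding sep=":" is exactly the portability bug this issue is about:
    Windows separates entries with ";" and allows ":" inside paths
    (e.g. C:\\testptch\\...), so a colon split mangles them.
    """
    return [p for p in classpath.split(sep) if p]
```

Splitting on {{os.pathsep}} (or {{$PATH_SEPARATOR}} in shell) keeps the same code correct on both platforms.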







[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2018-12-12 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719643#comment-16719643
 ] 

Sean Busbey commented on HADOOP-15998:
--

It looks like this only alters the scripts. how do the integration tests still 
pass? I'm presuming they pass multiple jars? Has it coincidentally just been 
sending a single jar?







[jira] [Created] (HADOOP-15971) website repo is missing a LICENSE / NOTICE

2018-12-04 Thread Sean Busbey (JIRA)
Sean Busbey created HADOOP-15971:


 Summary: website repo is missing a LICENSE / NOTICE
 Key: HADOOP-15971
 URL: https://issues.apache.org/jira/browse/HADOOP-15971
 Project: Hadoop Common
  Issue Type: Task
  Components: website
Reporter: Sean Busbey


Our website repo needs to have a LICENSE and NOTICE file at the top level:

[https://github.com/apache/hadoop-site]

currently it has neither.






[jira] [Commented] (HADOOP-15566) Remove HTrace support

2018-11-30 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705267#comment-16705267
 ] 

Sean Busbey commented on HADOOP-15566:
--

bq. Sean Busby did a lot of work on shading the Hadoop CP --targeting HBase, 
but it's not been rounded off with all the hadoop-tools modules yet, including 
the cloud storage connectors. Someone needs to volunteer to embrace shading

I don't want to get this jira sidetracked, but could you point me at more 
details on the gap here? I was under the assumption that hadoop-tools stuff was 
project internal and thus didn't need shading.

In the downstream facing shading we expressly don't shade HTrace because doing 
so breaks some of its functionality (tracing from application through libraries 
within the same JVM).

> Remove HTrace support
> -
>
> Key: HADOOP-15566
> URL: https://issues.apache.org/jira/browse/HADOOP-15566
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.1.0
>Reporter: Todd Lipcon
>Priority: Major
>  Labels: security
> Attachments: Screen Shot 2018-06-29 at 11.59.16 AM.png, 
> ss-trace-s3a.png
>
>
> The HTrace incubator project has voted to retire itself and won't be making 
> further releases. The Hadoop project currently has various hooks with HTrace. 
> It seems in some cases (eg HDFS-13702) these hooks have had measurable 
> performance overhead. Given these two factors, I think we should consider 
> removing the HTrace integration. If there is someone willing to do the work, 
> replacing it with OpenTracing might be a better choice since there is an 
> active community.






[jira] [Commented] (HADOOP-15939) Filter overlapping objenesis class in hadoop-client-minicluster

2018-11-19 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691800#comment-16691800
 ] 

Sean Busbey commented on HADOOP-15939:
--

Looking at [the source for 
mockito-1.8.5|https://github.com/mockito/mockito/tree/v1.8.5] I think it's 
objenesis 1.0 they're shipping.

+1, this looks like the right approach to me. I'd prefer it if the QA bot 
showed a failure for the overlapping classes, but that can be its own issue.
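
The overlap the shade plugin warns about can also be checked directly, outside Maven, by intersecting the class entries of the two jars; a small sketch (the jar names from the warning appear only in the example):

```python
import zipfile

def overlapping_classes(jar_a: str, jar_b: str) -> set:
    """Classes present in both jars; the uber jar keeps only one copy."""
    def classes(path):
        with zipfile.ZipFile(path) as jar:
            return {n for n in jar.namelist() if n.endswith(".class")}
    return classes(jar_a) & classes(jar_b)
```

A non-empty intersection is what the "define N overlapping classes" warning reports, and excluding one artifact (as this patch does for objenesis) is what empties it.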

> Filter overlapping objenesis class in hadoop-client-minicluster 
> 
>
> Key: HADOOP-15939
> URL: https://issues.apache.org/jira/browse/HADOOP-15939
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HADOOP-15939.001.patch
>
>
> As mentioned here and found with the latest Jenkins shadedclient run.
> Jenkins does not provide a detailed output file for the failure though. But 
> it can be reproed with the following cmd:
> {code:java}
> mvn verify -fae --batch-mode -am -pl 
> hadoop-client-modules/hadoop-client-check-invariants -pl 
> hadoop-client-modules/hadoop-client-check-test-invariants -pl 
> hadoop-client-modules/hadoop-client-integration-tests -Dtest=NoUnitTests 
> -Dmaven.javadoc.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true
> {code}
> Error Message:
> {code:java}
> [WARNING] objenesis-1.0.jar, mockito-all-1.8.5.jar define 30 overlapping 
> classes: 
> [WARNING]   - org.objenesis.ObjenesisBase
> [WARNING]   - org.objenesis.instantiator.gcj.GCJInstantiator
> [WARNING]   - org.objenesis.ObjenesisHelper
> [WARNING]   - org.objenesis.instantiator.jrockit.JRockitLegacyInstantiator
> [WARNING]   - org.objenesis.instantiator.sun.SunReflectionFactoryInstantiator
> [WARNING]   - org.objenesis.instantiator.ObjectInstantiator
> [WARNING]   - org.objenesis.instantiator.gcj.GCJInstantiatorBase$DummyStream
> [WARNING]   - org.objenesis.instantiator.basic.ObjectStreamClassInstantiator
> [WARNING]   - org.objenesis.ObjenesisException
> [WARNING]   - org.objenesis.Objenesis
> [WARNING]   - 20 more...
> [WARNING] maven-shade-plugin has detected that some class files are
> [WARNING] present in two or more JARs. When this happens, only one
> [WARNING] single version of the class is copied to the uber jar.
> [WARNING] Usually this is not harmful and you can skip these warnings,
> [WARNING] otherwise try to manually exclude artifacts based on
> [WARNING] mvn dependency:tree -Ddetail=true and the above output.
> [WARNING] See [http://maven.apache.org/plugins/maven-shade-plugin/]
> [INFO] Replacing original artifact with shaded artifact.
> {code}
>  






[jira] [Updated] (HADOOP-15939) Filter overlapping objenesis class in hadoop-client-minicluster

2018-11-19 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-15939:
-
Priority: Minor  (was: Major)







[jira] [Comment Edited] (HADOOP-15939) Filter overlapping objenesis class in hadoop-client-minicluster

2018-11-19 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691778#comment-16691778
 ] 

Sean Busbey edited comment on HADOOP-15939 at 11/19/18 2:46 PM:


I can put this in my review queue for Wednesday. is that fast enough?

(*Edit*: Never mind. I'm reviewing now.)


was (Author: busbey):
I can put this in my review queue for Wednesday. is that fast enough?

> Filter overlapping objenesis class in hadoop-client-minicluster 
> 
>
> Key: HADOOP-15939
> URL: https://issues.apache.org/jira/browse/HADOOP-15939
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HADOOP-15939.001.patch
>
>






[jira] [Commented] (HADOOP-15939) Filter overlapping objenesis class in hadoop-client-minicluster

2018-11-19 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691788#comment-16691788
 ] 

Sean Busbey commented on HADOOP-15939:
--

lol. the frowny face comment from past-me means we probably have to include it.

What version of objenesis does mockito-all include?







[jira] [Commented] (HADOOP-15939) Filter overlapping objenesis class in hadoop-client-minicluster

2018-11-19 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691785#comment-16691785
 ] 

Sean Busbey commented on HADOOP-15939:
--

An initial question: why is mockito-all being included in the client-facing 
minicluster artifact? Do the minicluster classes really have a hard dependency 
on it, or are we leaking something we use for internal testing?
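For triaging where the overlap comes from, the shade plugin's check is essentially a set intersection over the `.class` entries of the jars it merges. A small standalone sketch (Python, illustrative only; maven-shade itself is Java and this helper is not part of Hadoop or the plugin) that reproduces the overlap detection against any two jars:

```python
import zipfile

def overlapping_classes(jar_a, jar_b):
    """List .class entries present in both archives, mirroring the check
    behind maven-shade's 'overlapping classes' warning.

    Illustrative helper only -- not Hadoop or maven-shade-plugin code."""
    def class_entries(path):
        with zipfile.ZipFile(path) as zf:
            return {name for name in zf.namelist() if name.endswith(".class")}
    return sorted(class_entries(jar_a) & class_entries(jar_b))
```

Running this against objenesis-1.0.jar and mockito-all-1.8.5.jar should list the same 30 `org.objenesis` entries the warning reports; it also shows that mockito-all bundles the objenesis classes directly rather than declaring a dependency on them, which is why the bundled copy has to be inspected to answer the version question.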







[jira] [Commented] (HADOOP-15939) Filter overlapping objenesis class in hadoop-client-minicluster

2018-11-19 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691778#comment-16691778
 ] 

Sean Busbey commented on HADOOP-15939:
--

I can put this in my review queue for Wednesday. is that fast enough?







[jira] [Commented] (HADOOP-15878) website should have a list of CVEs w/impacted versions and guidance

2018-11-05 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675504#comment-16675504
 ] 

Sean Busbey commented on HADOOP-15878:
--

thanks for pushing this [~ajisakaa]!

> website should have a list of CVEs w/impacted versions and guidance
> ---
>
> Key: HADOOP-15878
> URL: https://issues.apache.org/jira/browse/HADOOP-15878
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation, website
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HADOOP-15878.0.patch, HADOOP-15878.0.rendered.patch
>
>
> Our website should have a page with publicly disclosed CVEs listed. They 
> should include the community's understanding of impacted and fixed versions.
> For a simple example, see what kafka does:
> https://kafka.apache.org/cve-list






[jira] [Commented] (HADOOP-13916) Document how downstream clients should make use of the new shaded client artifacts

2018-10-24 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662922#comment-16662922
 ] 

Sean Busbey commented on HADOOP-13916:
--

moved down from Critical to Major to reflect current prioritization.

> Document how downstream clients should make use of the new shaded client 
> artifacts
> --
>
> Key: HADOOP-13916
> URL: https://issues.apache.org/jira/browse/HADOOP-13916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
>
> provide a quickstart that walks through using the new shaded dependencies 
> with Maven to create a simple downstream project.






[jira] [Commented] (HADOOP-11656) Classpath isolation for downstream clients

2018-10-24 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662921#comment-16662921
 ] 

Sean Busbey commented on HADOOP-11656:
--

bq. Is there a user guide on how to use the new client for HDFS/YARN/MapReduce 
apps?

There is not yet; it's tracked in HADOOP-13916.

> Classpath isolation for downstream clients
> --
>
> Key: HADOOP-11656
> URL: https://issues.apache.org/jira/browse/HADOOP-11656
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
>  Labels: classloading, classpath, dependencies, scripts, shell
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-11656_proposal.md
>
>
> Currently, Hadoop exposes downstream clients to a variety of third party 
> libraries. As our code base grows and matures we increase the set of 
> libraries we rely on. At the same time, as our user base grows we increase 
> the likelihood that some downstream project will run into a conflict while 
> attempting to use a different version of some library we depend on. This has 
> already happened with i.e. Guava several times for HBase, Accumulo, and Spark 
> (and I'm sure others).
> While YARN-286 and MAPREDUCE-1700 provided an initial effort, they default to 
> off and they don't do anything to help dependency conflicts on the driver 
> side or for folks talking to HDFS directly. This should serve as an umbrella 
> for changes needed to do things thoroughly on the next major version.
> We should ensure that downstream clients
> 1) can depend on a client artifact for each of HDFS, YARN, and MapReduce that 
> doesn't pull in any third party dependencies
> 2) only see our public API classes (or as close to this as feasible) when 
> executing user provided code, whether client side in a launcher/driver or on 
> the cluster in a container or within MR.
> This provides us with a double benefit: users get less grief when they want 
> to run substantially ahead or behind the versions we need and the project is 
> freer to change our own dependency versions because they'll no longer be in 
> our compatibility promises.
> Project specific task jiras to follow after I get some justifying use cases 
> written in the comments.
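The downstream usage this umbrella issue aimed at ends up looking roughly like the following consumer pom fragment (a sketch; `hadoop-client-api` and `hadoop-client-runtime` are the shaded artifact ids that shipped with Hadoop 3, and the version number shown is only an example):

```xml
<!-- Illustrative downstream pom fragment; version is an example. -->
<dependencies>
  <!-- compile against the shaded public API only -->
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client-api</artifactId>
    <version>3.3.6</version>
  </dependency>
  <!-- relocated third-party classes, needed at runtime only -->
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client-runtime</artifactId>
    <version>3.3.6</version>
    <scope>runtime</scope>
  </dependency>
</dependencies>
```

Keeping the runtime artifact out of the compile scope is what delivers the isolation benefit described above: user code can only see the public API classes at compile time.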






[jira] [Updated] (HADOOP-13916) Document how downstream clients should make use of the new shaded client artifacts

2018-10-24 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-13916:
-
Priority: Major  (was: Critical)







[jira] [Updated] (HADOOP-13916) Document how downstream clients should make use of the new shaded client artifacts

2018-10-24 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-13916:
-
Issue Type: Improvement  (was: Bug)







[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-24 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662861#comment-16662861
 ] 

Sean Busbey commented on HADOOP-15815:
--

sounds good to me. Anything in the release notes for the maven-shade-plugin 
versions we'll pass through that looks like it'll need investigating?

> Upgrade Eclipse Jetty version due to security concerns
> --
>
> Key: HADOOP-15815
> URL: https://issues.apache.org/jira/browse/HADOOP-15815
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Boris Vulikh
>Assignee: Boris Vulikh
>Priority: Major
> Attachments: HADOOP-15815.01-2.patch
>
>
> * 
> [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657]
>  * 
> [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658]
>  * 
> [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656]
>  * 
> [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536]
> We should upgrade the dependency to version 9.3.24 or the latest, if possible.






[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-24 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662664#comment-16662664
 ] 

Sean Busbey commented on HADOOP-15815:
--

here's the shaded client failure log:

 

[https://builds.apache.org/job/PreCommit-HADOOP-Build/15385/artifact/out/patch-shadedclient.txt/*view*/]

 

here's the relevant bit pulled out in case that build gets eaten by the history 
monster before things can be addressed:

 
{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-shade-plugin:2.4.3:shade (default) on project 
hadoop-client-minicluster: Error creating shaded jar: null: 
IllegalArgumentException -> [Help 1]
 {code}

Does the dependency update include any pom dependencies? This sounds like 
MSHADE-122.







[jira] [Updated] (HADOOP-15878) website should have a list of CVEs w/impacted versions and guidance

2018-10-24 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-15878:
-
Status: Patch Available  (was: In Progress)

QABot will fail, since it doesn't understand the hadoop-site repo.







[jira] [Commented] (HADOOP-15878) website should have a list of CVEs w/impacted versions and guidance

2018-10-24 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662603#comment-16662603
 ] 

Sean Busbey commented on HADOOP-15878:
--

- v0
  - adds new page for CVE List  under "community" section of navbar
  - adds entries for everything from the last ~12 months

- v0 rendered
  - same as above, but after running {{hugo}} to render

If there's a PMC member with better records on reported on dates, please let me 
know. These ones are what I could figure out from mailing lists.







[jira] [Comment Edited] (HADOOP-15878) website should have a list of CVEs w/impacted versions and guidance

2018-10-24 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662603#comment-16662603
 ] 

Sean Busbey edited comment on HADOOP-15878 at 10/24/18 5:39 PM:


-v0
  - adds new page for CVE List  under "community" section of navbar
  - adds entries for everything from the last ~12 months

-v0 rendered
  - same as above, but after running {{hugo}} to render

If there's a PMC member with better records on reported on dates, please let me 
know. These ones are what I could figure out from mailing lists.


was (Author: busbey):
- v0
  - adds new page for CVE List  under "community" section of navbar
  - adds entries for everything from the last ~12 months

- v0 rendered
  - same as above, but after running {{hugo}} to render

If there's a PMC member with better records on reported on dates, please let me 
know. These ones are what I could figure out from mailing lists.







[jira] [Updated] (HADOOP-15878) website should have a list of CVEs w/impacted versions and guidance

2018-10-24 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-15878:
-
Attachment: HADOOP-15878.0.rendered.patch
HADOOP-15878.0.patch







[jira] [Work started] (HADOOP-15878) website should have a list of CVEs w/impacted versions and guidance

2018-10-24 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15878 started by Sean Busbey.







[jira] [Created] (HADOOP-15878) website should have a list of CVEs w/impacted versions and guidance

2018-10-24 Thread Sean Busbey (JIRA)
Sean Busbey created HADOOP-15878:


 Summary: website should have a list of CVEs w/impacted versions 
and guidance
 Key: HADOOP-15878
 URL: https://issues.apache.org/jira/browse/HADOOP-15878
 Project: Hadoop Common
  Issue Type: Task
  Components: documentation
Reporter: Sean Busbey
Assignee: Sean Busbey


Our website should have a page with publicly disclosed CVEs listed. They should 
include the community's understanding of impacted and fixed versions.

For a simple example, see what kafka does:

https://kafka.apache.org/cve-list






[jira] [Commented] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0

2018-10-19 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657436#comment-16657436
 ] 

Sean Busbey commented on HADOOP-15850:
--

This seems more severe than "Major". Am I correct that this impacts downstream 
users of DistCp beyond HBase?

> CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
> --
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15850.branch-3.0.patch, HADOOP-15850.v2.patch, 
> HADOOP-15850.v3.patch, HADOOP-15850.v4.patch, HADOOP-15850.v5.patch, 
> HADOOP-15850.v6.patch, testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating test failure of TestIncrementalBackupWithBulkLoad from 
> hbase against hadoop 3.1.1
> hbase MapReduceBackupCopyJob$BackupDistCp would create listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen - the two bulk loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> their isSplit() returns false. Otherwise the following from toString would 
> have been logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the hbase side, we could specify one bulk-loaded hfile per job, but that 
> defeats the purpose of using DistCp.
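For illustration, the guard the report implies — only compare listing entries as chunks when they are actually split — can be sketched with stand-in types. Everything below (the `Entry` class, the `consistent` method) is a hypothetical simplification, not DistCp's real `CopyListingFileStatus` handling:

```java
import java.util.List;

public class ChunkCheckSketch {
  /** Illustrative stand-in for a listing entry. */
  static final class Entry {
    final String path;
    final boolean split;
    Entry(String path, boolean split) { this.path = path; this.split = split; }
    boolean isSplit() { return split; }
  }

  /** Returns true if the listing is consistent for chunk concatenation. */
  static boolean consistent(List<Entry> entries) {
    Entry prior = null;
    for (Entry e : entries) {
      // Independent (non-split) files should never be compared as chunks,
      // which is the behavior the bug report argues for.
      if (prior != null && prior.isSplit() && e.isSplit()
          && !prior.path.equals(e.path)) {
        return false;  // chunks of different files interleaved
      }
      prior = e;
    }
    return true;
  }

  public static void main(String[] args) {
    // Two independent bulk-loaded hfiles, as in the failing test: no error.
    boolean ok = consistent(List.of(
        new Entry("hfile-a", false), new Entry("hfile-b", false)));
    System.out.println(ok);
  }
}
```

Under this sketch, the two independent hfiles pass the check rather than triggering the "doesnt match prior entry" failure.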



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13951) Precommit builds do not adequately protect against test malformed fs permissions.

2018-09-18 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619123#comment-16619123
 ] 

Sean Busbey commented on HADOOP-13951:
--

The lack of an assignee means that most likely no one is looking at this problem.

> Precommit builds do not adequately protect against test malformed fs 
> permissions.
> -
>
> Key: HADOOP-13951
> URL: https://issues.apache.org/jira/browse/HADOOP-13951
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Sean Busbey
>Priority: Critical
>
> Right now this is expressed as failed Precommit-YARN-build jobs when they run 
> on H5 / H6 (see INFRA-13148), but the problem exists for all of the 
> hadoop-related precommit jobs.
> The issue is that we have some tests in Common (and maybe HDFS) that 
> purposefully set permissions within the {{target/}} directory to simulate a 
> failure to interact with underlying fs data. The test sets some 
> subdirectories to have permissions such that we can no longer delete their 
> contents.
> Right now our precommit jobs include a step post-yetus-test-patch that 
> traverses the target directories and ensures that all subdirectories are 
> modifiable:
> {code}
> find ${WORKSPACE} -name target | xargs chmod -R u+w
> {code}
> Unfortunately, if we don't get to that line (say due to an aborted build, or 
> if the call to yetus test-patch exceeds the job timeout), then we are left in 
> a state where there are still subdirectories that can't be modified 
> (including deleted).
> Our builds also currently attempt to run a {{git clean}} at the very start of 
> the build after the repo is updated. If we have one of the aforementioned 
> timeouts that leaves a can't-be-deleted test directory, then all future 
> builds on that machine will fail attempting to run the {{git clean}} command.
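The permission repair quoted above could equivalently run as a pre-build step, before the {{git clean}}. A sketch of that idea in Java NIO follows; the class name and the simulated workspace layout are illustrative, not the actual Jenkins job code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class RestoreWrite {
  /** Restore owner-write permission on everything under root. */
  static void makeWritable(Path root) throws IOException {
    try (Stream<Path> paths = Files.walk(root)) {
      paths.forEach(p -> p.toFile().setWritable(true));
    }
  }

  public static void main(String[] args) throws IOException {
    // Simulate a workspace with a stuck, non-writable test directory.
    Path ws = Files.createTempDirectory("workspace");
    Path stuck = Files.createDirectories(ws.resolve("target/sub"));
    stuck.toFile().setWritable(false);

    makeWritable(ws);  // the repair step, run before git clean

    System.out.println(Files.isWritable(stuck));
  }
}
```

This mirrors the spirit of `find ${WORKSPACE} -name target | xargs chmod -R u+w`, but running it first means an aborted previous build can no longer poison the next one.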






[jira] [Updated] (HADOOP-15587) Securing ASF Hadoop releases out of the box

2018-07-09 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-15587:
-
Component/s: security

> Securing ASF Hadoop releases out of the box
> ---
>
> Key: HADOOP-15587
> URL: https://issues.apache.org/jira/browse/HADOOP-15587
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: build, common, documentation, security
>Reporter: Eric Yang
>Priority: Major
>
> [Mail 
> thread|http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201807.mbox/%3cdc06cefa-fe2b-4ca3-b9a9-1d6df0421...@hortonworks.com%3E]
>  started by Steve Loughran on the mailing lists to change the default Hadoop 
> release to be more secure; a list of improvements to include:
>  # Change default proxy acl settings to non-routable IPs.
>  # Implement proxy acl check for HTTP protocol.
>  # Change yarn.admin.acl setting to be more restricted.
>  # Review settings that need to be locked down by default.






[jira] [Updated] (HADOOP-15051) FSDataOutputStream returned by LocalFileSystem#createNonRecursive doesn't have hflush capability

2017-11-17 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-15051:
-
Component/s: fs

> FSDataOutputStream returned by LocalFileSystem#createNonRecursive doesn't 
> have hflush capability
> 
>
> Key: HADOOP-15051
> URL: https://issues.apache.org/jira/browse/HADOOP-15051
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.0.0-beta1
>Reporter: Ted Yu
>
> See HBASE-19289 for background information.
> Here is related hbase code (fs is instance of LocalFileSystem):
> {code}
> this.output = fs.createNonRecursive(path, overwritable, bufferSize, 
> replication, blockSize,
>   null);
> // TODO Be sure to add a check for hsync if this branch includes 
> HBASE-19024
> if (!(CommonFSUtils.hasCapability(output, "hflush"))) {
>   throw new StreamLacksCapabilityException("hflush");
> }
> {code}
> StreamCapabilities is used to poll "hflush" capability.
> [~busbey] suggested fixing this in hadoop.
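The capability probe HBase performs can be illustrated with simplified stand-in types. The real interface is org.apache.hadoop.fs.StreamCapabilities; everything below is a self-contained sketch, not Hadoop code:

```java
public class CapabilitySketch {
  /** Illustrative stand-in for org.apache.hadoop.fs.StreamCapabilities. */
  interface Capabilities {
    boolean hasCapability(String capability);
  }

  /** A local stream that, per the report, does not advertise hflush. */
  static final class LocalOut implements Capabilities {
    public boolean hasCapability(String capability) {
      return false;  // LocalFileSystem#createNonRecursive behaves like this
    }
  }

  public static void main(String[] args) {
    Capabilities out = new LocalOut();
    // HBase throws StreamLacksCapabilityException at this point.
    if (!out.hasCapability("hflush")) {
      System.out.println("stream lacks hflush");
    }
  }
}
```

The requested fix is for the stream returned by LocalFileSystem#createNonRecursive to answer true for "hflush", so callers like the HBase snippet above succeed.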






[jira] [Updated] (HADOOP-15051) FSDataOutputStream returned by LocalFileSystem#createNonRecursive doesn't have hflush capability

2017-11-17 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-15051:
-
Issue Type: New Feature  (was: Bug)

> FSDataOutputStream returned by LocalFileSystem#createNonRecursive doesn't 
> have hflush capability
> 
>
> Key: HADOOP-15051
> URL: https://issues.apache.org/jira/browse/HADOOP-15051
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.0.0-beta1
>Reporter: Ted Yu
>
> See HBASE-19289 for background information.
> Here is related hbase code (fs is instance of LocalFileSystem):
> {code}
> this.output = fs.createNonRecursive(path, overwritable, bufferSize, 
> replication, blockSize,
>   null);
> // TODO Be sure to add a check for hsync if this branch includes 
> HBASE-19024
> if (!(CommonFSUtils.hasCapability(output, "hflush"))) {
>   throw new StreamLacksCapabilityException("hflush");
> }
> {code}
> StreamCapabilities is used to poll "hflush" capability.
> [~busbey] suggested fixing this in hadoop.






[jira] [Commented] (HADOOP-14014) Shading runs on mvn deploy

2017-11-17 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257072#comment-16257072
 ] 

Sean Busbey commented on HADOOP-14014:
--

I believe this is intended behavior.

> Shading runs on mvn deploy
> --
>
> Key: HADOOP-14014
> URL: https://issues.apache.org/jira/browse/HADOOP-14014
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>
> I'm running "mvn deploy -DskipTests" and see that there is shading happening 
> in the build output. This seems like a bug.






[jira] [Commented] (HADOOP-14998) Make AuthenticationFilter @Public

2017-10-31 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16227451#comment-16227451
 ] 

Sean Busbey commented on HADOOP-14998:
--

Could we move these classes out of the main Hadoop artifacts? Either in Apache 
Commons or as a stand-alone library we publish?

> Make AuthenticationFilter @Public
> -
>
> Key: HADOOP-14998
> URL: https://issues.apache.org/jira/browse/HADOOP-14998
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Robert Kanter
>Assignee: Bharat Viswanadham
>
> {{org.apache.hadoop.security.authentication.server.AuthenticationFilter}} is 
> currently marked as {{\@Private}} and {{\@Unstable}}.  
> {code:java}
> @InterfaceAudience.Private
> @InterfaceStability.Unstable
> public class AuthenticationFilter implements Filter {
> {code}
> However, many other projects (e.g. Oozie, Hive, Solr, HBase, etc) have been 
> using it for quite some time without having any compatibility issues AFAIK.  
> It doesn't seem to have had any breaking changes in quite some time.  On top 
> of that, it implements {{javax.servlet.Filter}}, so it can't change too 
> widely anyway.  {{AuthenticationFilter}} provides a lot of useful code for 
> dealing with tokens, Kerberos, etc, and we should encourage related projects 
> to re-use this code instead of rolling their own.
> I propose we change it to {{\@Public}} and {{\@Evolving}}.






[jira] [Commented] (HADOOP-14998) Make AuthenticationFilter @Public

2017-10-31 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16227307#comment-16227307
 ] 

Sean Busbey commented on HADOOP-14998:
--

Could this wait until later? We already know that it relies on non-Hadoop APIs 
(at least javax.servlet.Filter), a known problem in our API that we've only 
begun to fix. The window between beta and GA seems like a poor time to make the 
problem worse by promising downstream projects that we'll start supporting 
their use of our heretofore internals.

> Make AuthenticationFilter @Public
> -
>
> Key: HADOOP-14998
> URL: https://issues.apache.org/jira/browse/HADOOP-14998
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Robert Kanter
>Assignee: Bharat Viswanadham
>
> {{org.apache.hadoop.security.authentication.server.AuthenticationFilter}} is 
> currently marked as {{\@Private}} and {{\@Unstable}}.  
> {code:java}
> @InterfaceAudience.Private
> @InterfaceStability.Unstable
> public class AuthenticationFilter implements Filter {
> {code}
> However, many other projects (e.g. Oozie, Hive, Solr, HBase, etc) have been 
> using it for quite some time without having any compatibility issues AFAIK.  
> It doesn't seem to have had any breaking changes in quite some time.  On top 
> of that, it implements {{javax.servlet.Filter}}, so it can't change too 
> widely anyway.  {{AuthenticationFilter}} provides a lot of useful code for 
> dealing with tokens, Kerberos, etc, and we should encourage related projects 
> to re-use this code instead of rolling their own.
> I propose we change it to {{\@Public}} and {{\@Evolving}}.






[jira] [Commented] (HADOOP-12956) Inevitable Log4j2 migration via slf4j

2017-10-30 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224915#comment-16224915
 ] 

Sean Busbey commented on HADOOP-12956:
--

{quote}
The only thing that supports the log4j 1 properties files is Log4j 1.x. That 
was declared EOL 2 years ago. The last release of Log4j 1 was 5 1/2 years ago. 
It doesn't run in Java 9 without hacking it.
{quote}

We're aware of the limitations of log4j 1. The burden on our operators for 
changing something as fundamental as logging is still something the project 
cares about. I'd be surprised if Hadoop took a hard look at Java 9 before late 
2018.

{quote}
At some point you are going to have to get off of Log4j 1.

The log4j team started an effort to create a properties file converter but it 
would only be able to convert Appenders & Layouts that are part of Log4j 1 
itself. That is working to some degree but is still considered experimental. 
Any user-created Appenders and Layouts would not be able to be migrated, as we 
would not be able to convert them to a Log4j 2 plugin.

That said, we welcome any ideas or contributions anyone wants to contribute to 
make the migration easier.
{quote}

I get that it's frustrating to have folks not migrating.  I'm a maintainer on a 
project that went through a major version change that didn't work well for 
operators (HBase in our 0.94 to 0.96 Event Horizon). The task was miserable for 
downstream folks as well as those on the project. That was just over 4 years 
ago and there are still folks running HBase 0.94.

Frankly, it'd be very helpful for the Log4j community to state plainly and 
directly whether or not support for log4j 1 properties files will ever happen. 
We (the hadoop project as well as some other communities I watch) have gotten a 
mishmash of responses about it being in progress vs not feasible. A hard stance 
of "not happening" makes it easier for communities to plan their limited 
attention.

{quote}
I should also point out, SLF4J isn't really an answer for this problem either 
as Logback doesn't support Log4j 1 configurations and its migration tool can't 
handle custom Appenders or Layouts either.
{quote}

SLF4J is exactly the operational answer Hadoop needs. It lets us move our 
code's assumptions off of log4j 1 while providing a log4j 1 bridge that will 
work with existing log4j 1 properties files. That way we can work incrementally 
on updating the code base while not requiring operators to change anything.  
Once we're done, operators who want to switch early can do so. As a project we 
can wait for our next major version to move the default to some other logging 
implementation.
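The facade approach described above can be sketched abstractly: application code targets a stable logging interface, and a pluggable "bridge" binding chosen at deploy time decides which backend actually writes. All types here are illustrative stand-ins, not the real SLF4J API:

```java
public class FacadeSketch {
  /** Illustrative stand-in for a logging facade like SLF4J's Logger. */
  interface Log {
    void info(String msg);
  }

  /**
   * A bridge binding chosen at deploy time — for example, one that
   * honors existing log4j 1 properties files. Here it just writes
   * to stdout for demonstration.
   */
  static Log bind() {
    return msg -> System.out.println("INFO " + msg);
  }

  public static void main(String[] args) {
    Log log = bind();            // code no longer assumes log4j 1 directly
    log.info("migration done");  // operators keep their existing config
  }
}
```

The design point is that swapping the `bind()` implementation changes the backend without touching application code, which is what lets operators keep their log4j 1 configuration while the code base migrates.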

> Inevitable Log4j2 migration via slf4j
> -
>
> Key: HADOOP-12956
> URL: https://issues.apache.org/jira/browse/HADOOP-12956
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Gopal V
>Assignee: Haohui Mai
>
> {{5 August 2015 --The Apache Logging Services™ Project Management Committee 
> (PMC) has announced that the Log4j™ 1.x logging framework has reached its end 
> of life (EOL) and is no longer officially supported.}}
> https://blogs.apache.org/foundation/entry/apache_logging_services_project_announces
> A whole framework log4j2 upgrade has to be synchronized, partly for improved 
> performance brought about by log4j2.
> https://logging.apache.org/log4j/2.x/manual/async.html#Performance






[jira] [Commented] (HADOOP-12956) Inevitable Log4j2 migration via slf4j

2017-10-29 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224226#comment-16224226
 ] 

Sean Busbey commented on HADOOP-12956:
--

We need to keep working with runtime deployments that rely on the log4j v1 
properties files. To date that hasn't been possible with Log4j v2 as far as I 
know.

> Inevitable Log4j2 migration via slf4j
> -
>
> Key: HADOOP-12956
> URL: https://issues.apache.org/jira/browse/HADOOP-12956
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Gopal V
>Assignee: Haohui Mai
>
> {{5 August 2015 --The Apache Logging Services™ Project Management Committee 
> (PMC) has announced that the Log4j™ 1.x logging framework has reached its end 
> of life (EOL) and is no longer officially supported.}}
> https://blogs.apache.org/foundation/entry/apache_logging_services_project_announces
> A whole framework log4j2 upgrade has to be synchronized, partly for improved 
> performance brought about by log4j2.
> https://logging.apache.org/log4j/2.x/manual/async.html#Performance






[jira] [Commented] (HADOOP-14636) TestKDiag failing intermittently on Jenkins/Yetus at login from keytab

2017-10-26 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16220756#comment-16220756
 ] 

Sean Busbey commented on HADOOP-14636:
--

I think we already have a profile for when Yetus runs. The Yetus personality 
for testing Hadoop always activates the {{test-patch}} profile when it runs 
maven commands.

AFAICT we currently only use it for setting some java options: 
https://github.com/apache/hadoop/blob/625039ef20e6011ab360131d70582a6e4bf2ec1d/hadoop-project/pom.xml#L1688

> TestKDiag failing intermittently on Jenkins/Yetus at login from keytab
> --
>
> Key: HADOOP-14636
> URL: https://issues.apache.org/jira/browse/HADOOP-14636
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 3.0.0-beta1
> Environment: {code}
> user.name = "jenkins"
> java.version = "1.8.0_131"
> java.security.krb5.conf = 
> "/testptch/hadoop/hadoop-common-project/hadoop-common/target/1499472499650/krb5.conf"
> kdc.resource.dir = "src/test/resources/kdc"
> hadoop.kerberos.kinit.command = "kinit"
> hadoop.security.authentication = "KERBEROS"
> hadoop.security.authorization = "false"
> hadoop.kerberos.min.seconds.before.relogin = "60"
> hadoop.security.dns.interface = "(unset)"
> hadoop.security.dns.nameserver = "(unset)"
> hadoop.rpc.protection = "authentication"
> hadoop.security.saslproperties.resolver.class = "(unset)"
> hadoop.security.crypto.codec.classes = "(unset)"
> hadoop.security.group.mapping = 
> "org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback"
> hadoop.security.impersonation.provider.class = "(unset)"
> dfs.data.transfer.protection = "(unset)"
> dfs.data.transfer.saslproperties.resolver.class = "(unset)"
> 2017-07-08 00:08:20,381 WARN  security.KDiag (KDiag.java:execute(365)) - The 
> default cluster security is insecure
> {code}
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: output.txt
>
>
> The test {{TestKDiag}} is failing intermittently on Yetus builds, 
> {code}
> org.apache.hadoop.security.KerberosAuthException: Login failure for user: 
> f...@example.com from keytab 
> /testptch/hadoop/hadoop-common-project/hadoop-common/target/keytab 
> javax.security.auth.login.LoginException: Unable to obtain password from user
> {code}
> The tests that fail are all trying to log in using a keytab that was just 
> created; the JVM isn't having any of it.
> Possible causes? I can think of a few to start with
> # keytab generation
> # keytab path parameter wrong
> # JVM isn't doing the login
> # some race condition
> # Host OS
> # Other environment issues (clock, network...)
> There's no recent changes in the kdiag or UGI code.
> The failure is intermittent, not surfacing for me (others?) locally, which 
> could point at: JVM, host OS, race condition, or other env issues.






[jira] [Commented] (HADOOP-14952) Catalina use of hadoop-client throws ClassNotFoundException for jersey

2017-10-24 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217671#comment-16217671
 ] 

Sean Busbey commented on HADOOP-14952:
--

bump? [~eximius] could you add the requested info?

> Catalina use of hadoop-client throws ClassNotFoundException for jersey 
> ---
>
> Key: HADOOP-14952
> URL: https://issues.apache.org/jira/browse/HADOOP-14952
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Kamil
>
> I was using org.apache.hadoop:hadoop-client in version 2.7.4 and it worked 
> fine, but recently had problems with CGLIB (was conflicting with Spring).
> I decided to try version 3.0.0-beta1 but server didn't start with exception:
> {code}
> 16-Oct-2017 10:27:12.918 SEVERE [localhost-startStop-1] 
> org.apache.catalina.core.ContainerBase.addChildInternal 
> ContainerBase.addChild: start:
>  org.apache.catalina.LifecycleException: Failed to start component 
> [StandardEngine[Catalina].StandardHost[localhost].StandardContext[]]
> at 
> org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:158)
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:724)
> at 
> org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:700)
> at 
> org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
> at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1107)
> at 
> org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1841)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NoClassDefFoundError: 
> com/sun/jersey/api/core/DefaultResourceConfig
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.catalina.startup.WebappServiceLoader.loadServices(WebappServiceLoader.java:188)
> at 
> org.apache.catalina.startup.WebappServiceLoader.load(WebappServiceLoader.java:159)
> at 
> org.apache.catalina.startup.ContextConfig.processServletContainerInitializers(ContextConfig.java:1611)
> at 
> org.apache.catalina.startup.ContextConfig.webConfig(ContextConfig.java:1131)
> at 
> org.apache.catalina.startup.ContextConfig.configureStart(ContextConfig.java:771)
> at 
> org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:298)
> at 
> org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:94)
> at 
> org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5092)
> at 
> org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:152)
> ... 10 more
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.jersey.api.core.DefaultResourceConfig
> at 
> org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1299)
> at 
> org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1133)
> ... 21 more
> {code}
> After adding com.sun.jersey:jersey-server:1.9.1 to my dependencies the server 
> started, but I think it should already be included in your dependencies.






[jira] [Updated] (HADOOP-11981) Add storage policy APIs to filesystem docs

2017-10-23 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-11981:
-
Labels:   (was: newbie)

> Add storage policy APIs to filesystem docs
> --
>
> Key: HADOOP-11981
> URL: https://issues.apache.org/jira/browse/HADOOP-11981
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-11981.incomplete.01.patch
>
>
> HDFS-8345 exposed the storage policy APIs via the FileSystem.
> The FileSystem docs should be updated accordingly.
> https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html






[jira] [Commented] (HADOOP-13916) Document how downstream clients should make use of the new shaded client artifacts

2017-10-20 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16213680#comment-16213680
 ] 

Sean Busbey commented on HADOOP-13916:
--

HBase has been a bit on fire lately due to a confluence of failures in our 
testing stuff, so I haven't been keeping this on mind. Let me check my schedule 
and see if this is doable.

> Document how downstream clients should make use of the new shaded client 
> artifacts
> --
>
> Key: HADOOP-13916
> URL: https://issues.apache.org/jira/browse/HADOOP-13916
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
>
> provide a quickstart that walks through using the new shaded dependencies 
> with Maven to create a simple downstream project.






[jira] [Work stopped] (HADOOP-13916) Document how downstream clients should make use of the new shaded client artifacts

2017-10-20 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13916 stopped by Sean Busbey.

> Document how downstream clients should make use of the new shaded client 
> artifacts
> --
>
> Key: HADOOP-13916
> URL: https://issues.apache.org/jira/browse/HADOOP-13916
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
>
> provide a quickstart that walks through using the new shaded dependencies 
> with Maven to create a simple downstream project.






[jira] [Commented] (HADOOP-14238) [Umbrella] Rechecking Guava's object is not exposed to user-facing API

2017-10-17 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16208147#comment-16208147
 ] 

Sean Busbey commented on HADOOP-14238:
--

apilyzer should work with our annotations now. Do folks have a preferred way to 
get an initial report? I could hook it into Yetus, or I could just run it 
manually and post the report output.

> [Umbrella] Rechecking Guava's object is not exposed to user-facing API
> --
>
> Key: HADOOP-14238
> URL: https://issues.apache.org/jira/browse/HADOOP-14238
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Priority: Critical
>
> This is reported by [~hitesh] on HADOOP-10101.
> At least, AMRMClient#waitFor takes Guava's Supplier instance as an instance.






[jira] [Commented] (HADOOP-14952) Catalina use of hadoop-client throws ClassNotFoundException for jersey

2017-10-17 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16208108#comment-16208108
 ] 

Sean Busbey commented on HADOOP-14952:
--

Please include a code snippet so I can attempt to reproduce the problem.

> Catalina use of hadoop-client throws ClassNotFoundException for jersey 
> ---
>
> Key: HADOOP-14952
> URL: https://issues.apache.org/jira/browse/HADOOP-14952
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Kamil
>
> I was using org.apache.hadoop:hadoop-client in version 2.7.4 and it worked 
> fine, but recently had problems with CGLIB (was conflicting with Spring).
> I decided to try version 3.0.0-beta1 but server didn't start with exception:
> {code}
> 16-Oct-2017 10:27:12.918 SEVERE [localhost-startStop-1] 
> org.apache.catalina.core.ContainerBase.addChildInternal 
> ContainerBase.addChild: start:
>  org.apache.catalina.LifecycleException: Failed to start component 
> [StandardEngine[Catalina].StandardHost[localhost].StandardContext[]]
> at 
> org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:158)
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:724)
> at 
> org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:700)
> at 
> org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
> at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1107)
> at 
> org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1841)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NoClassDefFoundError: 
> com/sun/jersey/api/core/DefaultResourceConfig
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.catalina.startup.WebappServiceLoader.loadServices(WebappServiceLoader.java:188)
> at 
> org.apache.catalina.startup.WebappServiceLoader.load(WebappServiceLoader.java:159)
> at 
> org.apache.catalina.startup.ContextConfig.processServletContainerInitializers(ContextConfig.java:1611)
> at 
> org.apache.catalina.startup.ContextConfig.webConfig(ContextConfig.java:1131)
> at 
> org.apache.catalina.startup.ContextConfig.configureStart(ContextConfig.java:771)
> at 
> org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:298)
> at 
> org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:94)
> at 
> org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5092)
> at 
> org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:152)
> ... 10 more
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.jersey.api.core.DefaultResourceConfig
> at 
> org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1299)
> at 
> org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1133)
> ... 21 more
> {code}
> After adding com.sun.jersey:jersey-server:1.9.1 to my dependencies the 
> server started, but I think it should already be included in your 
> dependencies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13921) Remove Log4j classes from JobConf

2017-10-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16207025#comment-16207025
 ] 

Sean Busbey commented on HADOOP-13921:
--

How can we update the description in the release notes (i.e. for 
[3.0.0-alpha4|http://hadoop.apache.org/docs/r3.0.0-alpha4/hadoop-project-dist/hadoop-common/release/3.0.0-alpha4/RELEASENOTES.3.0.0-alpha4.html])
 to make this change easier to spot for downstream folks?

It's not obvious from TEZ-3853 which version of Hadoop 3 you first attempted to 
update to. Would calling out the earlier alpha/beta release notes have made it 
easier to have a heads up?

> Remove Log4j classes from JobConf
> -
>
> Key: HADOOP-13921
> URL: https://issues.apache.org/jira/browse/HADOOP-13921
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-13921.0.patch, HADOOP-13921.1.patch
>
>
> Replace the use of log4j classes from JobConf so that the dependency is not 
> needed unless folks are making use of our custom log4j appenders or loading a 
> logging bridge to use that system.
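One common pattern for the change the description asks for — a hypothetical sketch, not a claim about the actual HADOOP-13921 patch — is to resolve the log4j-backed class reflectively, so the log4j jar is required only when the appender feature is actually used (TaskLogAppender is used here purely for illustration):

```java
public class OptionalLog4jHook {
    // Hypothetical helper: instantiate the log4j-backed appender by name so
    // there is no compile-time reference to it. Callers that never use the
    // log4j appenders never force the jar onto the classpath.
    static Object newTaskLogAppender() {
        try {
            Class<?> c = Class.forName("org.apache.hadoop.mapred.TaskLogAppender");
            return c.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(
                    "log4j-based appender requested but its jar is not on the classpath", e);
        }
    }
}
```

With this shape, the failure moves from class-load time to the first actual use of the log4j feature, which is the behavior the description describes.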






[jira] [Updated] (HADOOP-14952) Catalina use of hadoop-client throws ClassNotFoundException for jersey

2017-10-16 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-14952:
-
Summary: Catalina use of hadoop-client throws ClassNotFoundException for 
jersey   (was: Newest hadoop-client throws ClassNotFoundException)

> Catalina use of hadoop-client throws ClassNotFoundException for jersey 
> ---
>
> Key: HADOOP-14952
> URL: https://issues.apache.org/jira/browse/HADOOP-14952
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Kamil
>
> I was using org.apache.hadoop:hadoop-client version 2.7.4 and it worked 
> fine, but recently had problems with CGLIB (it was conflicting with Spring).
> I decided to try version 3.0.0-beta1, but the server didn't start, throwing 
> this exception:
> {code}
> 16-Oct-2017 10:27:12.918 SEVERE [localhost-startStop-1] 
> org.apache.catalina.core.ContainerBase.addChildInternal 
> ContainerBase.addChild: start:
>  org.apache.catalina.LifecycleException: Failed to start component 
> [StandardEngine[Catalina].StandardHost[localhost].StandardContext[]]
> at 
> org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:158)
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:724)
> at 
> org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:700)
> at 
> org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
> at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1107)
> at 
> org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1841)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NoClassDefFoundError: 
> com/sun/jersey/api/core/DefaultResourceConfig
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.catalina.startup.WebappServiceLoader.loadServices(WebappServiceLoader.java:188)
> at 
> org.apache.catalina.startup.WebappServiceLoader.load(WebappServiceLoader.java:159)
> at 
> org.apache.catalina.startup.ContextConfig.processServletContainerInitializers(ContextConfig.java:1611)
> at 
> org.apache.catalina.startup.ContextConfig.webConfig(ContextConfig.java:1131)
> at 
> org.apache.catalina.startup.ContextConfig.configureStart(ContextConfig.java:771)
> at 
> org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:298)
> at 
> org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:94)
> at 
> org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5092)
> at 
> org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:152)
> ... 10 more
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.jersey.api.core.DefaultResourceConfig
> at 
> org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1299)
> at 
> org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1133)
> ... 21 more
> {code}
> After adding com.sun.jersey:jersey-server:1.9.1 to my dependencies the 
> server started, but I think it should already be included in your 
> dependencies.






[jira] [Commented] (HADOOP-14952) Newest hadoop-client throws ClassNotFoundException

2017-10-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206162#comment-16206162
 ] 

Sean Busbey commented on HADOOP-14952:
--

Are you using Jersey in your own app? I don't see anything in that stack trace 
that indicates Hadoop is requesting the class.
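For context on where Catalina's lookup comes from: at deploy time Tomcat's WebappServiceLoader scans every META-INF/services/javax.servlet.ServletContainerInitializer entry on the webapp classpath and resolves each provider with Class.forName, so a jar that ships a service entry whose implementation (or a superclass of it, here com.sun.jersey.api.core.DefaultResourceConfig) is missing fails the whole deploy. A minimal sketch of that resolution step, assuming jersey-server is not on the classpath:

```java
public class ServiceScanDemo {
    // Roughly the resolution step Tomcat's WebappServiceLoader performs for
    // each provider listed in META-INF/services: load the class by name and
    // fail deployment if it (or anything it links to) is absent.
    static boolean providerLoadable(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException | NoClassDefFoundError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // False unless jersey-server (which provides DefaultResourceConfig)
        // is on the classpath, mirroring the reported failure.
        System.out.println(
                providerLoadable("com.sun.jersey.api.core.DefaultResourceConfig"));
    }
}
```

Whether the offending service entry comes from the application's own jars or from something pulled in transitively is exactly the open question; the trace alone does not say.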

> Newest hadoop-client throws ClassNotFoundException
> --
>
> Key: HADOOP-14952
> URL: https://issues.apache.org/jira/browse/HADOOP-14952
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Kamil
>
> I was using org.apache.hadoop:hadoop-client version 2.7.4 and it worked 
> fine, but recently had problems with CGLIB (it was conflicting with Spring).
> I decided to try version 3.0.0-beta1, but the server didn't start, throwing 
> this exception:
> {code}
> 16-Oct-2017 10:27:12.918 SEVERE [localhost-startStop-1] 
> org.apache.catalina.core.ContainerBase.addChildInternal 
> ContainerBase.addChild: start:
>  org.apache.catalina.LifecycleException: Failed to start component 
> [StandardEngine[Catalina].StandardHost[localhost].StandardContext[]]
> at 
> org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:158)
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:724)
> at 
> org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:700)
> at 
> org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
> at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1107)
> at 
> org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1841)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NoClassDefFoundError: 
> com/sun/jersey/api/core/DefaultResourceConfig
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.catalina.startup.WebappServiceLoader.loadServices(WebappServiceLoader.java:188)
> at 
> org.apache.catalina.startup.WebappServiceLoader.load(WebappServiceLoader.java:159)
> at 
> org.apache.catalina.startup.ContextConfig.processServletContainerInitializers(ContextConfig.java:1611)
> at 
> org.apache.catalina.startup.ContextConfig.webConfig(ContextConfig.java:1131)
> at 
> org.apache.catalina.startup.ContextConfig.configureStart(ContextConfig.java:771)
> at 
> org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:298)
> at 
> org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:94)
> at 
> org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5092)
> at 
> org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:152)
> ... 10 more
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.jersey.api.core.DefaultResourceConfig
> at 
> org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1299)
> at 
> org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1133)
> ... 21 more
> {code}
> After adding com.sun.jersey:jersey-server:1.9.1 to my dependencies the 
> server started, but I think it should already be included in your 
> dependencies.






[jira] [Updated] (HADOOP-14952) Newest hadoop-client throws ClassNotFoundException

2017-10-16 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-14952:
-
Affects Version/s: 3.0.0-beta1

> Newest hadoop-client throws ClassNotFoundException
> --
>
> Key: HADOOP-14952
> URL: https://issues.apache.org/jira/browse/HADOOP-14952
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Kamil
>
> I was using org.apache.hadoop:hadoop-client version 2.7.4 and it worked 
> fine, but recently had problems with CGLIB (it was conflicting with Spring).
> I decided to try version 3.0.0-beta1, but the server didn't start, throwing 
> this exception:
> {code}
> 16-Oct-2017 10:27:12.918 SEVERE [localhost-startStop-1] 
> org.apache.catalina.core.ContainerBase.addChildInternal 
> ContainerBase.addChild: start:
>  org.apache.catalina.LifecycleException: Failed to start component 
> [StandardEngine[Catalina].StandardHost[localhost].StandardContext[]]
> at 
> org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:158)
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:724)
> at 
> org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:700)
> at 
> org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
> at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1107)
> at 
> org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1841)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NoClassDefFoundError: 
> com/sun/jersey/api/core/DefaultResourceConfig
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.catalina.startup.WebappServiceLoader.loadServices(WebappServiceLoader.java:188)
> at 
> org.apache.catalina.startup.WebappServiceLoader.load(WebappServiceLoader.java:159)
> at 
> org.apache.catalina.startup.ContextConfig.processServletContainerInitializers(ContextConfig.java:1611)
> at 
> org.apache.catalina.startup.ContextConfig.webConfig(ContextConfig.java:1131)
> at 
> org.apache.catalina.startup.ContextConfig.configureStart(ContextConfig.java:771)
> at 
> org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:298)
> at 
> org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:94)
> at 
> org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5092)
> at 
> org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:152)
> ... 10 more
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.jersey.api.core.DefaultResourceConfig
> at 
> org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1299)
> at 
> org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1133)
> ... 21 more
> {code}
> After adding com.sun.jersey:jersey-server:1.9.1 to my dependencies the 
> server started, but I think it should already be included in your 
> dependencies.






[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x

2017-10-05 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193615#comment-16193615
 ] 

Sean Busbey commented on HADOOP-14178:
--

{quote}
Ted Yu
Can Hbase use hadoop shaded jars to avoid these kind of issue?
{quote}

Maybe in the future? Right now HBase's dependency on Hadoop is kind of messy 
for a few reasons.

# We have to keep working on top of both Hadoop 2.x and Hadoop 3.x. We mostly 
have this abstracted.
# We have parts of HBase that make use of Hadoop internals, such that we can't 
currently move the Hadoop 3 build over to the client artifacts.
# The parts of HBase that use Hadoop internals are currently all mixed up with 
parts that are proper downstream consumers, so we can't even e.g. isolate the 
problem parts and then avoid mockito there.

> Move Mockito up to version 2.x
> --
>
> Key: HADOOP-14178
> URL: https://issues.apache.org/jira/browse/HADOOP-14178
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 
> since the switch to maven in 2011. 
> Mockito is now at version 2.1, [with lots of Java 8 
> support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. 
> That's not just defining actions as closures, but also support for Optional 
> types, mocking methods in interfaces, etc. 
> It's only used for testing, and, *provided there aren't regressions*, the cost 
> of upgrading is low. The good news: test tools usually come with good test 
> coverage. The bad news: Mockito goes deep into Java bytecode.






[jira] [Commented] (HADOOP-13917) Ensure yetus personality runs the integration tests for the shaded client

2017-09-28 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184542#comment-16184542
 ] 

Sean Busbey commented on HADOOP-13917:
--

shaded client tests showed up in the [linux x86 QBT nightly run 
#541|https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-trunk-java8-linux-x86/541/artifact/out/console-report.html]

> Ensure yetus personality runs the integration tests for the shaded client
> -
>
> Key: HADOOP-13917
> URL: https://issues.apache.org/jira/browse/HADOOP-13917
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-13917.WIP.0.patch, HADOOP-14771.02.patch
>
>
> Either QBT or a different jenkins job should run our integration tests, 
> specifically the ones added for the shaded client.






[jira] [Updated] (HADOOP-13917) Ensure yetus personality runs the integration tests for the shaded client

2017-09-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-13917:
-
Summary: Ensure yetus personality runs the integration tests for the shaded 
client  (was: Ensure nightly builds run the integration tests for the shaded 
client)

> Ensure yetus personality runs the integration tests for the shaded client
> -
>
> Key: HADOOP-13917
> URL: https://issues.apache.org/jira/browse/HADOOP-13917
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-13917.WIP.0.patch, HADOOP-14771.02.patch
>
>
> Either QBT or a different jenkins job should run our integration tests, 
> specifically the ones added for the shaded client.






[jira] [Updated] (HADOOP-13917) Ensure nightly builds run the integration tests for the shaded client

2017-09-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-13917:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Filed YETUS-550 to add the log.

> Ensure nightly builds run the integration tests for the shaded client
> -
>
> Key: HADOOP-13917
> URL: https://issues.apache.org/jira/browse/HADOOP-13917
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-13917.WIP.0.patch, HADOOP-14771.02.patch
>
>
> Either QBT or a different jenkins job should run our integration tests, 
> specifically the ones added for the shaded client.






[jira] [Commented] (HADOOP-13917) Ensure nightly builds run the integration tests for the shaded client

2017-09-26 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181459#comment-16181459
 ] 

Sean Busbey commented on HADOOP-13917:
--

Failed as expected. I also started a rerun of HADOOP-14771 to check the current 
patch, and it passed as expected. Think we're good?

> Ensure nightly builds run the integration tests for the shaded client
> -
>
> Key: HADOOP-13917
> URL: https://issues.apache.org/jira/browse/HADOOP-13917
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Attachments: HADOOP-13917.WIP.0.patch, HADOOP-14771.02.patch
>
>
> Either QBT or a different jenkins job should run our integration tests, 
> specifically the ones added for the shaded client.






[jira] [Updated] (HADOOP-13917) Ensure nightly builds run the integration tests for the shaded client

2017-09-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-13917:
-
Attachment: HADOOP-14771.02.patch

> Ensure nightly builds run the integration tests for the shaded client
> -
>
> Key: HADOOP-13917
> URL: https://issues.apache.org/jira/browse/HADOOP-13917
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Attachments: HADOOP-13917.WIP.0.patch, HADOOP-14771.02.patch
>
>
> Either QBT or a different jenkins job should run our integration tests, 
> specifically the ones added for the shaded client.






[jira] [Commented] (HADOOP-13917) Ensure nightly builds run the integration tests for the shaded client

2017-09-26 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181318#comment-16181318
 ] 

Sean Busbey commented on HADOOP-13917:
--

sure. lemme post that.

> Ensure nightly builds run the integration tests for the shaded client
> -
>
> Key: HADOOP-13917
> URL: https://issues.apache.org/jira/browse/HADOOP-13917
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Attachments: HADOOP-13917.WIP.0.patch
>
>
> Either QBT or a different jenkins job should run our integration tests, 
> specifically the ones added for the shaded client.






[jira] [Commented] (HADOOP-13917) Ensure nightly builds run the integration tests for the shaded client

2017-09-26 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180936#comment-16180936
 ] 

Sean Busbey commented on HADOOP-13917:
--

Submitted new precommit and QBT runs now that the addendum for YETUS-543 has 
landed.

> Ensure nightly builds run the integration tests for the shaded client
> -
>
> Key: HADOOP-13917
> URL: https://issues.apache.org/jira/browse/HADOOP-13917
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Attachments: HADOOP-13917.WIP.0.patch
>
>
> Either QBT or a different jenkins job should run our integration tests, 
> specifically the ones added for the shaded client.






[jira] [Commented] (HADOOP-14771) hadoop-client does not include hadoop-yarn-client

2017-09-21 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174814#comment-16174814
 ] 

Sean Busbey commented on HADOOP-14771:
--

{quote}
The new patch works (tested locally) even if we remove 
hadoop-yarn-server-resourcemanager and hadoop-yarn-server-nodemanager from the 
exclusions. Let me know if we should remove both of these from the exclusions.
{quote}

Sounds good. If things work once those are included, let's remove the 
exclusions. We can always pare things down later if e.g. the jars are too big 
or we want to discourage some use case.

> hadoop-client does not include hadoop-yarn-client
> -
>
> Key: HADOOP-14771
> URL: https://issues.apache.org/jira/browse/HADOOP-14771
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Haibo Chen
>Assignee: Ajay Kumar
>Priority: Critical
> Attachments: HADOOP-14771.01.patch, HADOOP-14771.02.patch, 
> HADOOP-14771.03.patch
>
>
> The hadoop-client does not include hadoop-yarn-client; thus, the shaded 
> hadoop-client is incomplete.
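A downstream user can sanity-check whether a client artifact on the classpath actually carries a given entry point without initializing any classes, by probing for the class resource instead. A sketch, assuming the standard YARN client entry point name org.apache.hadoop.yarn.client.api.YarnClient:

```java
public class ClientCompletenessCheck {
    // Probe for the .class resource instead of loading the class, so the
    // check never triggers static initializers or transitive linkage errors.
    static boolean hasClassResource(String binaryName) {
        String path = binaryName.replace('.', '/') + ".class";
        return ClassLoader.getSystemClassLoader().getResource(path) != null;
    }

    public static void main(String[] args) {
        // Prints false unless a hadoop-client artifact that bundles the YARN
        // client is on the classpath.
        System.out.println(
                hasClassResource("org.apache.hadoop.yarn.client.api.YarnClient"));
    }
}
```

This kind of probe is a cheap way to notice an incomplete client bundle at startup rather than at first use.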






[jira] [Updated] (HADOOP-13916) Document how downstream clients should make use of the new shaded client artifacts

2017-09-19 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-13916:
-
Target Version/s: 3.0.0  (was: 3.0.0-beta1)

> Document how downstream clients should make use of the new shaded client 
> artifacts
> --
>
> Key: HADOOP-13916
> URL: https://issues.apache.org/jira/browse/HADOOP-13916
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
>
> provide a quickstart that walks through using the new shaded dependencies 
> with Maven to create a simple downstream project.





