[jira] [Created] (HADOOP-14844) Remove requirement to specify TenantGuid for MSI Token Provider
Atul Sikaria created HADOOP-14844:
-------------------------------------

             Summary: Remove requirement to specify TenantGuid for MSI Token Provider
                 Key: HADOOP-14844
                 URL: https://issues.apache.org/jira/browse/HADOOP-14844
             Project: Hadoop Common
          Issue Type: Improvement
          Components: fs/adl
            Reporter: Atul Sikaria

The MSI identity extension on Azure VMs has removed the need to specify the tenant guid as part of the request to retrieve a token from the MSI service on the local VM. This means the tenant guid configuration parameter is no longer needed, so this change removes the redundant parameter. It also makes the port number optional - if not specified, the ADLS SDK uses the default port (which happens to be 50342, but that is transparent to Hadoop code).

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
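[Editor's sketch of the simplified configuration this change enables. The property names below are taken from the ADLS connector documentation of that era and should be treated as assumptions; check the docs for your release before copying them.]

```xml
<!-- core-site.xml sketch: MSI token provider for adl:// (property names
     are assumptions from the ADLS connector docs, not from this JIRA) -->
<property>
  <name>fs.adl.oauth2.access.token.provider.type</name>
  <value>Msi</value>
</property>
<!-- Optional after this change: only needed if the local MSI endpoint
     listens on a non-default port (the SDK default happens to be 50342). -->
<property>
  <name>fs.adl.oauth2.msi.port</name>
  <value>50342</value>
</property>
<!-- The tenant guid property is the redundant parameter this change removes;
     with the new MSI extension it no longer needs to be set at all. -->
```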
Re: [DISCUSS] Looking to Apache Hadoop 3.1 release
Hi Wangda -

Thank you for starting this conversation! +1000 for a faster release cadence. Quicker releases make turning around security fixes so much easier.

When we consider alpha features, let's please ensure that they are not delivered in a state that has known security issues, and also make sure that they are disabled by default. IMO it is not a feature - alpha or otherwise - unless it has some reasonable assurance of being secure. Please don't see this as calling out any particular feature; I just think we need to be very explicit about security expectations. Maybe this is already well understood.

Thank you for this proposed plan and for volunteering!

—larry

On Wed, Sep 6, 2017 at 7:22 PM, Anu Engineer wrote:
> Hi Wangda,
>
> We are planning to start the Ozone merge discussion by the end of this
> month. I am hopeful that it will be merged pretty soon after that.
> Please add Ozone to the list of features that are being tracked for Apache
> Hadoop 3.1.
>
> We would love to release Ozone as an alpha feature in Hadoop 3.1.
>
> Thanks
> Anu
>
> [...]
Re: [DISCUSS] Looking to Apache Hadoop 3.1 release
Hi Wangda,

We are planning to start the Ozone merge discussion by the end of this month. I am hopeful that it will be merged pretty soon after that. Please add Ozone to the list of features that are being tracked for Apache Hadoop 3.1.

We would love to release Ozone as an alpha feature in Hadoop 3.1.

Thanks
Anu

On 9/6/17, 2:26 PM, "Arun Suresh" wrote:
>Thanks for starting this Wangda.
>
>I would also like to add:
>- YARN-5972: Support Pausing/Freezing of opportunistic containers
>
>Cheers
>-Arun
>
>[...]
Re: [DISCUSS] Looking to Apache Hadoop 3.1 release
Thanks for starting this Wangda.

I would also like to add:
- YARN-5972: Support Pausing/Freezing of opportunistic containers

Cheers
-Arun

On Wed, Sep 6, 2017 at 1:49 PM, Steve Loughran wrote:
> [...]
Re: [DISCUSS] Looking to Apache Hadoop 3.1 release
> On 6 Sep 2017, at 19:13, Wangda Tan wrote:
>
> [...]
>
> Please let me know if I missed any features targeted to 3.1 per this
> timeline.

HADOOP-13786: S3Guard committer, which also adds resilience to failures talking to S3 (we barely have any today).

> And I want to volunteer myself as release manager of 3.1.0 release. Please
> let me know if you have any suggestions/concerns.

well volunteered :)
[DISCUSS] Looking to Apache Hadoop 3.1 release
Hi all,

As we discussed on [1], there were proposals from Steve / Vinod etc. to have a faster cadence of releases and to start thinking of a Hadoop 3.1 release earlier than March 2018 as is currently proposed.

I think this is a good idea. I'd like to start the process sooner and establish the timeline etc. so that we can be ready when 3.0.0 GA is out. With this we can also establish a faster cadence for future Hadoop 3.x releases.

To this end, I propose to target Hadoop 3.1.0 for a release by mid-Jan 2018 (about 4.5 months from now and 2.5 months after 3.0-GA, instead of 6.5 months from now).

I'd also want to take this opportunity to come up with a more elaborate release plan to avoid some of the confusion we had with the 3.0 beta. General proposal for the timeline (per this other proposal [2]):
- Feature freeze date: all features should be merged by Dec 15, 2017.
- Code freeze date: blockers/critical only, no more improvements and non-blocker/critical bug fixes: Jan 1, 2018.
- Release date: Jan 15, 2018.

Following is a list of features on my radar which could be candidates for a 3.1 release:
- YARN-5734, Dynamic scheduler queue configuration. (Owner: Jonathan Hung)
- YARN-5881, Add absolute resource configuration to CapacityScheduler. (Owner: Sunil)
- YARN-5673, Container-executor rewrite for better security, extensibility and portability. (Owner: Varun Vasudev)
- YARN-6223, GPU isolation. (Owner: Wangda)

And from email [3] mentioned by Andrew, there are several other HDFS features that people want to release with 3.1 as well, assuming they fit the timelines:
- Storage Policy Satisfier
- HDFS tiered storage

Please let me know if I missed any features targeted to 3.1 per this timeline.

And I want to volunteer myself as release manager of the 3.1.0 release. Please let me know if you have any suggestions/concerns.

Thanks,
Wangda Tan

[1] http://markmail.org/message/hwar5f5ap654ck5o?q=Branch+merges+and+3%2E0%2E0-beta1+scope
[2] http://markmail.org/message/hwar5f5ap654ck5o?q=Branch+merges+and+3%2E0%2E0-beta1+scope#query:Branch%20merges%20and%203.0.0-beta1%20scope+page:1+mid:2hqqkhl2dymcikf5+state:results
[3] http://markmail.org/message/h35obzqrh3ag6dgn?q=Branch+merges+and+3%2E0%2E0-beta1+scope
Re: [VOTE] Merge yarn-native-services branch into trunk
> Please correct me if I’m wrong, but the current summary of the branch,
> post these changes, looks like:

Sorry for the confusion - I was actively writing the formal documentation for how to use it / how it works etc., and will post it soon, in a few hours.

> On Sep 6, 2017, at 10:15 AM, Allen Wittenauer wrote:
> [...]
Re: [VOTE] Merge yarn-native-services branch into trunk
> On Sep 5, 2017, at 6:23 PM, Jian He wrote:
>
>> If it doesn’t have all the bells and whistles, then it shouldn’t be on
>> port 53 by default.
> Sure, I’ll change the default port to not use 53 and document it.
>
>> *how* is it getting launched on a privileged port? It sounds like the
>> expectation is to run “command” as root. *ALL* of the previous daemons in
>> Hadoop that needed a privileged port used jsvc. Why isn’t this one? These
>> questions matter from a security standpoint.
> Yes, it is running as “root” to be able to use the privileged port. The DNS
> server is not yet integrated with the hadoop script.
>
>> Check the output. It’s pretty obviously borked:
> Thanks for pointing out. Missed this when rebasing onto trunk.

Please correct me if I’m wrong, but the current summary of the branch, post these changes, looks like:

* A bunch of mostly new Java code that may or may not have javadocs (post-revert YARN-6877, still working out HADOOP-14835)
* ~1/3 of the docs are roadmap/TBD
* ~1/3 of the docs are for an optional DNS daemon that has no end user hook to start it
* ~1/3 of the docs are for a REST API that comes from some undefined daemon (apiserver?)
* Two new, but undocumented, subcommands to yarn
* There are no docs for admins or users on how to actually start or use this completely new/separate/optional feature

How are outside people (e.g., non-branch committers) supposed to test this new feature under these conditions?
Re: why the doxia-snapshot dependency
> On Sep 6, 2017, at 9:53 AM, Steve Loughran wrote:
>
> Well, it turns out not to like depth-4 MD tags, of the form: DOXIA-533,
> though that looks like a long-standing issue, not a regression

Yup.

> workaround: don't use level4 titles. And do check locally before bothering to
> upload the patch

That’s actually one of the side-benefits. Most contributors never check their javadoc or site generation, leaving that up to Yetus. It’s obviously less than ideal, but whatcha gonna do?

Anyway, if site goes into an infinite loop, it basically eats up resources on the QA boxes until maven GCs. (Jenkins has trouble killing docker containers. We probably need to write some trap code in Yetus.)
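[Editor's note: for readers unfamiliar with the workaround being discussed, DOXIA-533 is triggered by fourth-level markdown headings, so the practical fix in site docs is to stop one level up. A minimal illustration, with hypothetical section names:]

```markdown
<!-- Avoid: a depth-4 heading can trip the doxia markdown parser (DOXIA-533) -->
#### Fine-grained details

<!-- Workaround: stay at depth 3, or fake the sub-heading with bold text -->
### Fine-grained details

**Fine-grained details**
```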
[jira] [Resolved] (HADOOP-13894) s3a troubleshooting to cover the "JSON parse error" message
[ https://issues.apache.org/jira/browse/HADOOP-13894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-13894.
-------------------------------------
    Resolution: Duplicate

> s3a troubleshooting to cover the "JSON parse error" message
> -----------------------------------------------------------
>
>                 Key: HADOOP-13894
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13894
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: documentation, fs/s3
>    Affects Versions: 2.7.3
>            Reporter: Steve Loughran
>            Priority: Minor
>
> Generally, problems in s3 IO during list operations surface as JSON parse
> errors, with the underlying cause lost (unchecked HTTP error code,
> text/plain, text/html, interrupted thread).
> Document this fact in the troubleshooting section.
Re: why the doxia-snapshot dependency
> On 6 Sep 2017, at 16:47, Allen Wittenauer wrote:
> [...]

Well, it turns out not to like depth-4 MD tags, of the form: DOXIA-533, though that looks like a long-standing issue, not a regression.

workaround: don't use level4 titles. And do check locally before bothering to upload the patch

[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.090 s
[INFO] Finished at: 2017-09-06T16:21:42+01:00
[INFO] Final Memory: 43M/604M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-site-plugin:3.6:site (default-cli) on project hadoop-aws: Execution default-cli of goal org.apache.maven.plugins:maven-site-plugin:3.6:site failed. EmptyStackException -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-site-plugin:3.6:site (default-cli) on project hadoop-aws: Execution default-cli of goal org.apache.maven.plugins:maven-site-plugin:3.6:site failed.
        at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
        at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
        at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
        at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
        at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
        at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
        at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
        at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
        at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
        at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
        at org.apache.maven.cli.MavenCli.execute(MavenCli.java:863)
        at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
        at org.apache.maven.cli.MavenCli.main(MavenCli.java:199)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
        at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
        at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
        at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.PluginExecutionException: Execution default-cli of goal org.apache.maven.plugins:maven-site-plugin:3.6:site failed.
        at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:145)
        at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
        ... 20 more
Caused by: java.util.EmptyStackException
        at java.util.Stack.peek(Stack.java:102)
        at org.apache.maven.doxia.index.IndexingSink.peek(IndexingSink.java:292)
        at org.apache.maven.doxia.index.IndexingSink.text(IndexingSink.java:239)
        at org.apache.maven.doxia.sink.impl.SinkAdapter.text(SinkAdapter.java:874)
        at
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/515/

[Sep 5, 2017 9:46:07 AM] (kai.zheng) HDFS-12388. A bad error message in DFSStripedOutputStream. Contributed
[Sep 5, 2017 1:16:57 PM] (stevel) HADOOP-14820 Wasb mkdirs security checks inconsistent with HDFS.
[Sep 5, 2017 5:08:27 PM] (xiao) HDFS-12359. Re-encryption should operate with minimum KMS ACL
[Sep 5, 2017 9:16:03 PM] (wang) HDFS-11882. Precisely calculate acked length of striped block groups in
[Sep 5, 2017 10:11:37 PM] (weichiu) HADOOP-14688. Intern strings in KeyVersion and EncryptedKeyVersion.
[Sep 5, 2017 11:33:29 PM] (wang) HDFS-12377. Refactor TestReadStripedFileWithDecoding to avoid test
[Sep 6, 2017 5:29:52 AM] (kai.zheng) HDFS-12392. Writing striped file failed due to different cell size.
[Sep 6, 2017 6:26:57 AM] (jzhuge) HADOOP-14103. Sort out hadoop-aws contract-test-options.xml. Contributed
[Sep 6, 2017 6:34:55 AM] (xyao) HADOOP-14839. DistCp log output should contain copied and deleted files

-1 overall

The following subsystems voted -1:
    findbugs unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    FindBugs :
       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
       Hard coded reference to an absolute pathname in org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext) At DockerLinuxContainerRuntime.java:[line 490]

    Failed junit tests :
       hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
       hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery
       hadoop.hdfs.server.namenode.TestReencryptionWithKMS
       hadoop.hdfs.web.TestWebHDFSXAttr
       hadoop.hdfs.TestLeaseRecoveryStriped
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
       hadoop.hdfs.web.TestFSMainOperationsWebHdfs
       hadoop.hdfs.TestReadStripedFileWithMissingBlocks
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
       hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation
       hadoop.yarn.client.cli.TestLogsCLI
       hadoop.mapreduce.v2.hs.webapp.TestHSWebApp
       hadoop.yarn.sls.TestReservationSystemInvariants
       hadoop.yarn.sls.TestSLSRunner

    Timed out junit tests :
       org.apache.hadoop.hdfs.TestWriteReadStripedFile
       org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA

   cc:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/515/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/515/artifact/out/diff-compile-javac-root.txt [292K]

   checkstyle:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/515/artifact/out/diff-checkstyle-root.txt [17M]

   pylint:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/515/artifact/out/diff-patch-pylint.txt [20K]

   shellcheck:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/515/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/515/artifact/out/diff-patch-shelldocs.txt [12K]

   whitespace:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/515/artifact/out/whitespace-eol.txt [11M]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/515/artifact/out/whitespace-tabs.txt [1.2M]

   findbugs:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/515/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html [8.0K]

   javadoc:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/515/artifact/out/patch-javadoc-root.txt [2.0M]

   unit:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/515/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [672K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/515/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [64K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/515/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt [16K]
Re: why the doxia-snapshot dependency
> On Sep 6, 2017, at 7:20 AM, Steve Loughran wrote:
>
> Every morning my laptop downloads the doxia 1.8 snapshot for its build
>
> ….
>
> This implies that the build isn't reproducible, which isn't that bad for a
> short-lived dev branch, but not what we want for any releases

This version of doxia includes an upgraded version of the markdown processor. Combined with the upgraded maven-site-plugin, it fixes two very important bugs:

* MUCH better handling of URLs. Older versions would exit with failure if they hit hdfs:// as a URL, despite it being perfectly legal. [I’ve been "hand-fixing” release notes and the like to avoid hitting this one.]
* The parser doesn’t have the infinite loop bug when it hits certain combinations of broken markdown, usually tables.

I agree that it’s… less than ideal.

When I wrote the original version of that patch months ago, I was hoping it was a stop-gap. Worst case, we publish our own version of the plugin. But given that users will be empowered to fetch their own notes at build time, I felt it was important that this be more bullet proof…
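[Editor's sketch of the kind of pom wiring under discussion. The plugin/module coordinates below appear in the thread, but the exact shape of Hadoop's pom is an assumption; this is illustrative, not the actual HADOOP-14364 patch.]

```xml
<!-- Illustrative sketch: overriding maven-site-plugin's markdown module
     with the doxia 1.8 snapshot (exact Hadoop pom layout is an assumption). -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-site-plugin</artifactId>
  <version>3.6</version>
  <dependencies>
    <dependency>
      <groupId>org.apache.maven.doxia</groupId>
      <artifactId>doxia-module-markdown</artifactId>
      <version>1.8-SNAPSHOT</version>
    </dependency>
  </dependencies>
</plugin>
<!-- By default Maven re-checks a SNAPSHOT's metadata daily, which is why the
     build re-downloads it every morning and is not reproducible until doxia
     1.8 is released and the version can be pinned. -->
```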
Re: why the doxia-snapshot dependency
Allen mentioned the reason in HADOOP-14364. I guess 1.7 does not work, and 1.8 is not released yet.

Kihwal

On Wed, Sep 6, 2017 at 9:20 AM, Steve Loughran wrote:
>
> Every morning my laptop downloads the doxia 1.8 snapshot for its build
>
> [INFO]
> [INFO] --- maven-site-plugin:3.6:attach-descriptor (attach-descriptor) @ hadoop-main ---
> Downloading: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-module-markdown/1.8-SNAPSHOT/maven-metadata.xml
> Downloaded: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-module-markdown/1.8-SNAPSHOT/maven-metadata.xml (790 B at 0.7 KB/sec)
> Downloading: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-modules/1.8-SNAPSHOT/maven-metadata.xml
> Downloaded: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-modules/1.8-SNAPSHOT/maven-metadata.xml (820 B at 1.3 KB/sec)
> Downloading: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia/1.8-SNAPSHOT/maven-metadata.xml
> Downloaded: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia/1.8-SNAPSHOT/maven-metadata.xml (812 B at 1.3 KB/sec)
> Downloading: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-module-xhtml/1.8-SNAPSHOT/maven-metadata.xml
> Downloaded: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-module-xhtml/1.8-SNAPSHOT/maven-metadata.xml (787 B at 1.3 KB/sec)
> Downloading: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-core/1.8-SNAPSHOT/maven-metadata.xml
> Downloaded: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-core/1.8-SNAPSHOT/maven-metadata.xml (990 B at 1.6 KB/sec)
> Downloading: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-sink-api/1.8-SNAPSHOT/maven-metadata.xml
> Downloaded: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-sink-api/1.8-SNAPSHOT/maven-metadata.xml (783 B at 1.2 KB/sec)
> Downloading: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-logging-api/1.8-SNAPSHOT/maven-metadata.xml
> Downloaded: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-logging-api/1.8-SNAPSHOT/maven-metadata.xml (786 B at 1.3 KB/sec)
>
> This implies that the build isn't reproducible, which isn't that bad for a
> short-lived dev branch, but not what we want for any releases
why the doxia-snapshot dependency
Every morning my laptop downloads the doxia 1.8 snapshot for its build

[INFO]
[INFO] --- maven-site-plugin:3.6:attach-descriptor (attach-descriptor) @ hadoop-main ---
Downloading: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-module-markdown/1.8-SNAPSHOT/maven-metadata.xml
Downloaded: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-module-markdown/1.8-SNAPSHOT/maven-metadata.xml (790 B at 0.7 KB/sec)
Downloading: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-modules/1.8-SNAPSHOT/maven-metadata.xml
Downloaded: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-modules/1.8-SNAPSHOT/maven-metadata.xml (820 B at 1.3 KB/sec)
Downloading: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia/1.8-SNAPSHOT/maven-metadata.xml
Downloaded: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia/1.8-SNAPSHOT/maven-metadata.xml (812 B at 1.3 KB/sec)
Downloading: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-module-xhtml/1.8-SNAPSHOT/maven-metadata.xml
Downloaded: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-module-xhtml/1.8-SNAPSHOT/maven-metadata.xml (787 B at 1.3 KB/sec)
Downloading: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-core/1.8-SNAPSHOT/maven-metadata.xml
Downloaded: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-core/1.8-SNAPSHOT/maven-metadata.xml (990 B at 1.6 KB/sec)
Downloading: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-sink-api/1.8-SNAPSHOT/maven-metadata.xml
Downloaded: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-sink-api/1.8-SNAPSHOT/maven-metadata.xml (783 B at 1.2 KB/sec)
Downloading: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-logging-api/1.8-SNAPSHOT/maven-metadata.xml
Downloaded: https://repository.apache.org/snapshots/org/apache/maven/doxia/doxia-logging-api/1.8-SNAPSHOT/maven-metadata.xml (786 B at 1.3 KB/sec)

This implies that the build isn't reproducible, which isn't that bad for a short-lived dev branch, but not what we want for any releases
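The reproducibility concern raised above comes down to the snapshot dependency: a SNAPSHOT version re-resolves against the repository on every build. One conventional way to make the site build reproducible again, once a doxia release ships, would be to pin the released module on the maven-site-plugin. This is an illustrative pom.xml fragment only, not a committed Hadoop change; the version numbers are assumptions:

```xml
<!-- Illustrative sketch: pin a released doxia-module-markdown on the
     site plugin so the build stops resolving 1.8-SNAPSHOT metadata on
     every run. Versions shown are assumptions, not a tested change. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-site-plugin</artifactId>
  <version>3.6</version>
  <dependencies>
    <dependency>
      <groupId>org.apache.maven.doxia</groupId>
      <artifactId>doxia-module-markdown</artifactId>
      <!-- replace with the real released version when doxia 1.8 is out -->
      <version>1.8</version>
    </dependency>
  </dependencies>
</plugin>
```

Until such a release exists, every build that touches the site plugin will keep fetching fresh SNAPSHOT metadata, which is exactly the behavior shown in the log above.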
[jira] [Created] (HADOOP-14843) FsPermission symbolic parsing failed to detect invalid argument
Jason Lowe created HADOOP-14843:
---

Summary: FsPermission symbolic parsing failed to detect invalid argument
Key: HADOOP-14843
URL: https://issues.apache.org/jira/browse/HADOOP-14843
Project: Hadoop Common
Issue Type: Bug
Components: fs
Affects Versions: 2.8.1, 2.7.4
Reporter: Jason Lowe

A user misunderstood the syntax format for the FsPermission symbolic constructor and passed the argument "-rwr" instead of "u=rw,g=r". In 2.7 and earlier this was silently misinterpreted as mode 0777, and in 2.8 it oddly became mode . In either case FsPermission should have flagged "-rwr" as an invalid argument rather than silently misinterpreting it.

--
This message was sent by Atlassian JIRA (v6.4.14#64029)
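The strict validation this issue asks for can be sketched in a few lines. The class below is an illustrative, self-contained validator, not Hadoop's actual FsPermission parser; it deliberately requires an explicit who part ([ugoa]) in every clause, which is one simple way to make a string like "-rwr" fail fast instead of being silently misread:

```java
import java.util.regex.Pattern;

// Illustrative sketch only (NOT Hadoop's FsPermission implementation):
// accept "u=rw,g=r"-style symbolic modes, reject malformed input loudly.
public class SymbolicModeCheck {

    // One clause: a non-empty who part ([ugoa]+), one operator (+, = or -),
    // then zero or more permission characters. Requiring the who part is a
    // design choice here so that "-rwr" is rejected rather than guessed at.
    private static final Pattern CLAUSE =
            Pattern.compile("[ugoa]+[+=-][rwxXst]*");

    /** Returns true only when every comma-separated clause is well formed. */
    public static boolean isValidSymbolic(String mode) {
        if (mode == null || mode.isEmpty()) {
            return false;
        }
        for (String clause : mode.split(",", -1)) {
            if (!CLAUSE.matcher(clause).matches()) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isValidSymbolic("u=rw,g=r")); // well formed
        System.out.println(isValidSymbolic("-rwr"));     // rejected: no who part
    }
}
```

A caller using this shape would throw IllegalArgumentException on a false result instead of proceeding with a misinterpreted mode, which is the behavior the report says both 2.7 and 2.8 lacked.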
[jira] [Reopened] (HADOOP-14839) DistCp log output should contain copied and deleted files and directories
[ https://issues.apache.org/jira/browse/HADOOP-14839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yiqun Lin reopened HADOOP-14839:

> DistCp log output should contain copied and deleted files and directories
> -
>
> Key: HADOOP-14839
> URL: https://issues.apache.org/jira/browse/HADOOP-14839
> Project: Hadoop Common
> Issue Type: Improvement
> Components: tools/distcp
> Affects Versions: 2.7.1
> Reporter: Konstantin Shaposhnikov
> Assignee: Yiqun Lin
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14839.006.patch, HADOOP-14839-branch-2.001.patch,
> HDFS-10234.001.patch, HDFS-10234.002.patch, HDFS-10234.003.patch,
> HDFS-10234.004.patch, HDFS-10234.005.patch
>
> DistCp log output (specified via {{-log}} command line option) currently
> contains only skipped and failed (when failures are ignored via {{-i}}) files.
> It will be more useful if it also contains copied and deleted files and
> created directories.
> This should be fixed in
> https://github.com/apache/hadoop/blob/branch-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java

--
This message was sent by Atlassian JIRA (v6.4.14#64029)