[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty
[ https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15737413#comment-15737413 ] Xiao Chen commented on HADOOP-13597: Thanks John for revving and others for reviewing. My comments below: - {{HttpServer2}}: I don't understand {{skipSecretProvider}}. Could you explain and add some comments/javadocs? Is this related to the AuthenticationFilter and its secret-provider startup tricks? - {{HttpServer2}}: Would it be possible to have {{createHttpsChannelConnector}} call {{createHttpChannelConnector}} first, to reduce duplicated code? The {{httpConfig.setSecureScheme(HTTPS_SCHEME);}} line seems reasonable to move inside the method too. - I found the {{AccessLoggingConfiguration}} class name confusing. The class javadoc didn't help much either - I only figured it out by looking at the code usage. I can't think of a better name (appreciated if anyone else has one), but we should state in the javadoc: 1. this is a configuration object that logs each access; 2. it redacts sensitive information. Actually, maybe it would be better for this to compose a {{Configuration}} rather than inherit from it? At least whoever uses it later doesn't have to figure out which methods are overrides. (BTW, {{@Override}} annotations are currently missing on all methods, and {{set}} seems to be missing a {{super.set}}.) - {{KMSWebServer}}: Nit - I think hadoop code mostly has static imports on 1 line, ignoring the 80-char rule. - {{KMSWebServer}}: Totally theoretical, but it may be good to also have an {{isAlive}} method, and probably a {{waitActive}}-ish method in the MiniKMS, so interested tests can call that and reduce flaky tests due to startup races. - Searching for {{tomcat}}, I still see several nitty references: {{AuthenticationFilter}}'s var name {{isInitializedByTomcat}}, {{CommandsManual.md}} and {{SecureMode.md}}, KMS's doc {{index.md.vm}}, and some code comments etc.
- For the passwords, agree with Robert on supportability. However I also see similar code in {{DFSUtil}} ({{loadSslConfToHttpServerBuilder}} and {{getPassword}}). Were these copied over? We should at least move that to a common util and avoid this level of duplication. This would probably leave us not having to change {{Configuration}}, only adding a wrapper util. Or, per Wei-Chiu's suggestion, maybe it's not needed any more. More javadocs would be appreciated here too, as to why such a method is needed. - Didn't see an answer to Allen's ask about unit tests. (Take a look at hadoop-common-project/hadoop-common/src/test/scripts if you're wondering how that's done.) - Nit: {{kms-site.xml}} follows the other -site.xmls in having a comment line "put site-specific...", which is good. Please follow them more closely and put this line before the {{}} element. :) > Switch KMS from Tomcat to Jetty > --- > > Key: HADOOP-13597 > URL: https://issues.apache.org/jira/browse/HADOOP-13597 > Project: Hadoop Common > Issue Type: New Feature > Components: kms >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-13597.001.patch, HADOOP-13597.002.patch, > HADOOP-13597.003.patch > > > The Tomcat 6 we are using will reach EOL at the end of 2017. While there are > other good options, I would propose switching to {{Jetty 9}} for the > following reasons: > * Easier migration. Both Tomcat and Jetty are based on {{Servlet > Containers}}, so we don't have to change client code that much. It would require > more work to switch to {{JAX-RS}}. > * Well established. > * Good performance and scalability. > Other alternatives: > * Jersey + Grizzly > * Tomcat 8 > Your opinions will be greatly appreciated. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
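[Editor's note] The composition-over-inheritance suggestion above can be sketched as follows. This is a hypothetical illustration, not the actual patch: the class name, the stand-in `Map` for `Configuration`, and the redaction rule are all invented for the example.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Hypothetical sketch of the composition approach: instead of extending
 * Configuration, wrap one and log each (redacted) read. A plain Map stands in
 * for Hadoop's Configuration so the example is self-contained.
 */
public class AccessLoggedConfig {
    private final Map<String, String> delegate;          // stands in for Configuration
    private final StringBuilder accessLog = new StringBuilder();

    public AccessLoggedConfig(Map<String, String> delegate) {
        this.delegate = delegate;
    }

    public String get(String key) {
        String value = delegate.get(key);
        // Redact anything that looks like a secret before logging the access.
        String logged = key.toLowerCase().contains("password") ? "<redacted>" : value;
        accessLog.append(key).append('=').append(logged).append('\n');
        return value;
    }

    public String getAccessLog() {
        return accessLog.toString();
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("kms.http.port", "9600");
        conf.put("ssl.keystore.password", "secret");

        AccessLoggedConfig c = new AccessLoggedConfig(conf);
        c.get("kms.http.port");
        c.get("ssl.keystore.password");
        System.out.print(c.getAccessLog());
    }
}
```

Because the wrapper exposes only its own `get`, callers never have to wonder which inherited methods are overridden, which is the concern raised above.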
[jira] [Updated] (HADOOP-13805) UGI.getCurrentUser() fails if user does not have a keytab associated
[ https://issues.apache.org/jira/browse/HADOOP-13805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13805: --- Target Version/s: 2.8.0, 2.7.4, 3.0.0-alpha2 (was: 3.0.0-alpha2) > UGI.getCurrentUser() fails if user does not have a keytab associated > > > Key: HADOOP-13805 > URL: https://issues.apache.org/jira/browse/HADOOP-13805 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2 >Reporter: Alejandro Abdelnur >Assignee: Xiao Chen > Attachments: HADOOP-13805.01.patch, HADOOP-13805.02.patch, > HADOOP-13805.03.patch > > > HADOOP-13558's intention was to keep UGI from trying to renew the TGT when the > UGI is created from an existing Subject, as in that case the keytab is not > 'owned' by UGI but by the creator of the Subject. > In HADOOP-13558 we introduced a new private UGI constructor > {{UserGroupInformation(Subject subject, final boolean externalKeyTab)}} and > we use it with TRUE only when doing a {{UGI.loginUserFromSubject()}}. > The problem is, when we call {{UGI.getCurrentUser()}}, and UGI was created > via a Subject (via the {{UGI.loginUserFromSubject()}} method), we call {{new > UserGroupInformation(subject)}} which will delegate to > {{UserGroupInformation(Subject subject, final boolean externalKeyTab)}} and > that will use externalKeyTab == *FALSE*. > Then the UGI returned by {{UGI.getCurrentUser()}} will attempt to login using > a non-existing keytab if the TGT expired. > This problem is experienced in {{KMSClientProvider}} when used by the HDFS > filesystem client accessing an encryption zone.
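[Editor's note] The constructor-delegation bug described above can be reproduced in miniature with plain Java. This is a minimal stand-in, not the real UserGroupInformation code; the class and method names are invented for the example.

```java
/**
 * Minimal stand-in illustrating the delegation bug: the one-argument
 * constructor hard-codes externalKeyTab = false, so a UGI recreated from the
 * same Subject by getCurrentUser() "forgets" that the keytab is owned by the
 * caller and later attempts a bogus keytab re-login.
 */
public class UgiDelegationSketch {
    static class Ugi {
        final boolean externalKeyTab;

        Ugi(Object subject) {
            this(subject, false);            // the problematic default
        }

        Ugi(Object subject, boolean externalKeyTab) {
            this.externalKeyTab = externalKeyTab;
        }

        boolean shouldRelogin() {
            // The real UGI tries a keytab re-login unless the keytab is external.
            return !externalKeyTab;
        }
    }

    public static void main(String[] args) {
        Object subject = new Object();
        Ugi loginUgi = new Ugi(subject, true);      // loginUserFromSubject() path
        Ugi currentUgi = new Ugi(subject);          // getCurrentUser() path
        System.out.println(loginUgi.shouldRelogin());   // false: correct, keytab is external
        System.out.println(currentUgi.shouldRelogin()); // true: would attempt a non-existent keytab
    }
}
```

The two objects wrap the same Subject yet disagree about the keytab's ownership, which is exactly the failure mode the issue describes.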
[jira] [Commented] (HADOOP-13805) UGI.getCurrentUser() fails if user does not have a keytab associated
[ https://issues.apache.org/jira/browse/HADOOP-13805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15737331#comment-15737331 ] Xiao Chen commented on HADOOP-13805: Thanks for the ping, I think this is major, but should target the same as HADOOP-13558. Tucu, please feel free to modify if you disagree.
[jira] [Updated] (HADOOP-13805) UGI.getCurrentUser() fails if user does not have a keytab associated
[ https://issues.apache.org/jira/browse/HADOOP-13805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13805: --- Priority: Major (was: Blocker)
[jira] [Comment Edited] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies
[ https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15737321#comment-15737321 ] Sean Busbey edited comment on HADOOP-11804 at 12/10/16 6:16 AM: -14 - rebased to trunk (4c38f11) - move compilation of our integration test to after the shaded artifacts they need exist. was (Author: busbey): -14 - move compilation of our integration test to after the shaded artifacts they need exist. > POC Hadoop Client w/o transitive dependencies > - > > Key: HADOOP-11804 > URL: https://issues.apache.org/jira/browse/HADOOP-11804 > Project: Hadoop Common > Issue Type: Sub-task > Components: build >Reporter: Sean Busbey >Assignee: Sean Busbey > Attachments: HADOOP-11804.1.patch, HADOOP-11804.10.patch, > HADOOP-11804.11.patch, HADOOP-11804.12.patch, HADOOP-11804.13.patch, > HADOOP-11804.14.patch, HADOOP-11804.2.patch, HADOOP-11804.3.patch, > HADOOP-11804.4.patch, HADOOP-11804.5.patch, HADOOP-11804.6.patch, > HADOOP-11804.7.patch, HADOOP-11804.8.patch, HADOOP-11804.9.patch, > hadoop-11804-client-test.tar.gz > > > make a hadoop-client-api and hadoop-client-runtime that i.e. HBase can use to > talk with a Hadoop cluster without seeing any of the implementation > dependencies. > see proposal on parent for details. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies
[ https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-11804: - Attachment: HADOOP-11804.14.patch -14 - move compilation of our integration test to after the shaded artifacts they need exist.
[jira] [Commented] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request
[ https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15737306#comment-15737306 ] Hudson commented on HADOOP-13565: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10985 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10985/]) HADOOP-13565. KerberosAuthenticationHandler#authenticate should not (xyao: rev 4c38f11cec0664b70e52f9563052dca8fb17c33f) * (edit) hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/KerberosAuthenticationHandler.java > KerberosAuthenticationHandler#authenticate should not rebuild SPN based on > client request > - > > Key: HADOOP-13565 > URL: https://issues.apache.org/jira/browse/HADOOP-13565 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.5.0 >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-13565.00.patch, HADOOP-13565.01.patch, > HADOOP-13565.02.patch, HADOOP-13565.03.patch > > > In KerberosAuthenticationHandler#authenticate, we use canonicalized server > name derived from HTTP request to build server SPN and authenticate client. > This can be problematic if the HTTP client/server are running from a > non-local Kerberos realm that the local realm has trust with (e.g., NN UI). > For example, > The server is running its HTTP endpoint using SPN from the client realm: > hadoop.http.authentication.kerberos.principal > HTTP/_HOST/TEST.COM > When client sends request to namenode at http://NN1.example.com:50070 from > client.test@test.com. > The client talks to KDC first and gets a service ticket > HTTP/NN1.example.com/TEST.COM to authenticate with the server via SPNEGO > negotiation. > The authentication will end up with either no valid credential error or > checksum failure depending on the HTTP client naming resolution or HTTP Host > field from the request header provided by the browser. 
> The root cause is that {{KerberosUtil.getServicePrincipal("HTTP", serverName)}} will > always return a SPN with the local realm (HTTP/nn.example@example.com) no > matter whether the server login SPN is from that domain or not. > The proposed fix is to use the default server login principal instead (by > passing null as the 1st parameter to gssManager.createCredential()). > This way we avoid dependency on HTTP client behavior (Host header or name > resolution like CNAME) or assumptions about the local realm.
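[Editor's note] The root cause above can be illustrated with a toy version of the SPN rebuilding. This is not the real KerberosUtil; the class name, constant, and realms are invented to show why a request-derived SPN always lands in the local realm.

```java
/**
 * Toy illustration of why rebuilding the SPN from the request's host name goes
 * wrong: the rebuilt principal always carries the local realm, even when the
 * server actually logged in with an SPN from a trusted remote realm.
 */
public class SpnSketch {
    static final String LOCAL_REALM = "EXAMPLE.COM";

    // Mimics the problematic pattern: the realm is always the local one.
    static String rebuiltSpn(String serverName) {
        return "HTTP/" + serverName + "@" + LOCAL_REALM;
    }

    public static void main(String[] args) {
        String loginSpn = "HTTP/nn1.example.com@TEST.COM"; // the server's real login SPN
        String rebuilt = rebuiltSpn("nn1.example.com");
        System.out.println(rebuilt);                  // HTTP/nn1.example.com@EXAMPLE.COM
        System.out.println(loginSpn.equals(rebuilt)); // false -> credential/checksum failure
    }
}
```

The proposed fix sidesteps the rebuilt name entirely: passing null as the name argument to the GSS acceptor-credential call lets the server accept with whatever principal it actually logged in as.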
[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty
[ https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15737300#comment-15737300 ] John Zhuge commented on HADOOP-13597: - Thanks Robert for the review! Fixed 2 and 3. 1. I will post a simplified {{Configuration#getPasswordString}} in the next patch. It may still return null, though, for several reasons: 1) to be consistent with getPassword; 2) some passwords retrieved are simply stored somewhere and may not get used at all, and if they are accessed, an NPE is an OK indicator; 3) HttpServer2/SSLFactory callers can handle null passwords.
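[Editor's note] A sketch of the lookup-with-null-fallback behavior being discussed. This is hypothetical, not the patch's actual {{getPasswordString}}: plain Maps stand in for the credential provider and the Configuration so the example is self-contained.

```java
import java.util.Map;

/**
 * Hypothetical sketch of a getPasswordString-style helper: try the credential
 * provider first (modeled as a map of char[]), fall back to the plain config
 * value, and deliberately return null when neither has the key, matching
 * Configuration#getPassword semantics.
 */
public class PasswordLookupSketch {
    static String getPasswordString(Map<String, char[]> credentialProvider,
                                    Map<String, String> conf, String name) {
        char[] fromProvider = credentialProvider.get(name);
        if (fromProvider != null) {
            return new String(fromProvider);
        }
        return conf.get(name);   // may be null; callers are expected to cope
    }

    public static void main(String[] args) {
        Map<String, char[]> creds = Map.of("ssl.server.keystore.password", "pw1".toCharArray());
        Map<String, String> conf = Map.of("ssl.server.truststore.password", "pw2");
        System.out.println(getPasswordString(creds, conf, "ssl.server.keystore.password"));  // pw1
        System.out.println(getPasswordString(creds, conf, "ssl.server.truststore.password")); // pw2
        System.out.println(getPasswordString(creds, conf, "missing"));                        // null
    }
}
```

The null return is the design point debated above: it keeps the helper consistent with getPassword and pushes null-handling to callers such as HttpServer2/SSLFactory.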
[jira] [Updated] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request
[ https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HADOOP-13565: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha2 2.8.0 Status: Resolved (was: Patch Available) Thanks [~jnp] for the review. I've committed the patch to trunk/branch-2/branch-2.8.
[jira] [Updated] (HADOOP-13883) Add description of -fs option in generic command usage
[ https://issues.apache.org/jira/browse/HADOOP-13883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HADOOP-13883: --- Attachment: HADOOP-13883-addendum.patch I took a glance at the code in the hdfs project and found 14 commands that use {{ToolRunner#run}}, which means these commands can support the '-fs' option. Attached the addendum patch for this JIRA. [~brahmareddy], I see that HDFS-11226 hasn't been merged; can you apply this minor change to your latest patch and add {{hdfs storagepolicies}} as well? Thanks a lot for that. > Add description of -fs option in generic command usage > -- > > Key: HADOOP-13883 > URL: https://issues.apache.org/jira/browse/HADOOP-13883 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-13883-addendum.patch, HADOOP-13883.001.patch > > > Currently the description of the '-fs' option is missing from the generic command > usage in the documentation ({{CommandsManual.md}}), so users won't know to use > this option, even though it already makes sense for {{hdfs dfsadmin}}, > {{hdfs fsck}}, etc.
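[Editor's note] Why commands wired through ToolRunner#run get '-fs' for free: the generic options are parsed and folded into the configuration before the tool sees its own arguments. The following is a self-contained mimic of that flow, not the real ToolRunner/GenericOptionsParser; the class and method names are invented.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Self-contained mimic of what ToolRunner/GenericOptionsParser do with -fs:
 * generic options are stripped out and applied to the configuration, and only
 * the remaining arguments reach the tool's own run() method.
 */
public class GenericFsOptionSketch {
    static String[] parseGenericOptions(String[] args, Map<String, String> conf) {
        List<String> remaining = new ArrayList<>();
        for (int i = 0; i < args.length; i++) {
            if ("-fs".equals(args[i]) && i + 1 < args.length) {
                conf.put("fs.defaultFS", args[++i]);   // -fs overrides the default filesystem
            } else {
                remaining.add(args[i]);
            }
        }
        return remaining.toArray(new String[0]);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        String[] rest = parseGenericOptions(
            new String[] {"-fs", "hdfs://nn1:8020", "-report"}, conf);
        System.out.println(conf.get("fs.defaultFS")); // hdfs://nn1:8020
        System.out.println(rest.length);              // 1 (just -report)
    }
}
```

This is why the comment above equates "uses {{ToolRunner#run}}" with "supports '-fs'": any command entering through that path inherits the generic-option parsing without doing anything itself.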
[jira] [Comment Edited] (HADOOP-13883) Add description of -fs option in generic command usage
[ https://issues.apache.org/jira/browse/HADOOP-13883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15737123#comment-15737123 ] Yiqun Lin edited comment on HADOOP-13883 at 12/10/16 3:48 AM: -- {quote} I feel,this should goto branc-2.8 and branch-2 also {quote} Agreed on this. Sorry to make you confused, [~brahmareddy]. Since this issue exists in trunk, I added 3.0.0-alpha2 to the affected versions. {quote} It's not listed there because not everything supports -fs. {quote} Now I see the '-fs' option can be used in many hdfs subcommands, but these commands hardly document the usage of '-fs'. Do you mean we should add the usage of the '-fs' option one by one? I think a better way, based on the v01 patch, is to document the hdfs commands that support the '-fs' option. was (Author: linyiqun): {quote} I feel,this should goto branc-2.8 and branch-2 also {quote} Agreed. Sorry to make you confused, [~brahmareddy]. {quote} It's not listed there because not everything supports -fs. {quote} Now I see the '-fs' option can be used in many hdfs subcommands, but these commands hardly not documented the usage of '-fs'. Do you mean we should add the usage of '-fs' option one by one? I think one better way based on the v01 patch is to add the supported hdfs commands which can used '-fs' option in the documentation.
[jira] [Comment Edited] (HADOOP-13883) Add description of -fs option in generic command usage
[ https://issues.apache.org/jira/browse/HADOOP-13883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15737123#comment-15737123 ] Yiqun Lin edited comment on HADOOP-13883 at 12/10/16 3:35 AM: -- {quote} I feel,this should goto branc-2.8 and branch-2 also {quote} Agreed. Sorry to make you confused, [~brahmareddy]. {quote} It's not listed there because not everything supports -fs. {quote} Now I see the '-fs' option can be used in many hdfs subcommands, but these commands hardly not documented the usage of '-fs'. Do you mean we should add the usage of '-fs' option one by one? I think one better way based on the v01 patch is to add the supported hdfs commands which can used '-fs' option in the documentation. was (Author: linyiqun): f{quote} I feel,this should goto branc-2.8 and branch-2 also {quote} Agreed. Sorry to make you confused, [~brahmareddy]. {quote} It's not listed there because not everything supports -fs. {quote} Now I see the '-fs' option can be used in many hdfs subcommands, but these commands hardly not documented the usage of '-fs'. Do you mean we should add the usage of '-fs' option one by one? I think one better way based on the v01 patch is to add the supported hdfs commands which can used '-fs' option in the documentation.
[jira] [Commented] (HADOOP-13883) Add description of -fs option in generic command usage
[ https://issues.apache.org/jira/browse/HADOOP-13883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15737123#comment-15737123 ] Yiqun Lin commented on HADOOP-13883: {quote} I feel,this should goto branc-2.8 and branch-2 also {quote} Agreed. Sorry to make you confused, [~brahmareddy]. {quote} It's not listed there because not everything supports -fs. {quote} Now I see the '-fs' option can be used in many hdfs subcommands, but these commands hardly not documented the usage of '-fs'. Do you mean we should add the usage of '-fs' option one by one? I think one better way based on the v01 patch is to add the supported hdfs commands which can used '-fs' option in the documentation.
[jira] [Commented] (HADOOP-11614) Remove httpclient dependency from hadoop-openstack
[ https://issues.apache.org/jira/browse/HADOOP-11614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15737044#comment-15737044 ] Hadoop QA commented on HADOOP-11614: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s{color} | {color:green} 
hadoop-tools_hadoop-openstack generated 0 new + 6 unchanged - 1 fixed = 6 total (was 7) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 12s{color} | {color:orange} hadoop-tools/hadoop-openstack: The patch generated 13 new + 133 unchanged - 126 fixed = 146 total (was 259) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s{color} | {color:green} hadoop-openstack in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 31s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HADOOP-11614 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12842642/HADOOP-11614-005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle | | uname | Linux 83b5649f84cd 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 92a8917 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/11238/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-openstack.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/11238/testReport/ | | modules | C: hadoop-tools/hadoop-openstack U: hadoop-tools/hadoop-openstack | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/11238/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Remove httpclient dependency from hadoop-openstack > -- > > Key: HADOOP-11614 >
[jira] [Updated] (HADOOP-11614) Remove httpclient dependency from hadoop-openstack
[ https://issues.apache.org/jira/browse/HADOOP-11614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-11614: --- Attachment: HADOOP-11614-005.patch 005 patch: Rebased, and removed trailing whitespaces. > Remove httpclient dependency from hadoop-openstack > -- > > Key: HADOOP-11614 > URL: https://issues.apache.org/jira/browse/HADOOP-11614 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Assignee: Brahma Reddy Battula >Priority: Blocker > Attachments: HADOOP-11614-002.patch, HADOOP-11614-003.patch, > HADOOP-11614-004.patch, HADOOP-11614-005.patch, HADOOP-11614.patch > > > Remove httpclient dependency from hadoop-openstack and its pom.xml file. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-11614) Remove httpclient dependency from hadoop-openstack
[ https://issues.apache.org/jira/browse/HADOOP-11614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-11614: --- Target Version/s: 3.0.0-beta1 (was: 3.0.0-alpha2) Thanks [~andrew.wang] for taking care of this. Updated the target versions. > Remove httpclient dependency from hadoop-openstack > -- > > Key: HADOOP-11614 > URL: https://issues.apache.org/jira/browse/HADOOP-11614 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Assignee: Brahma Reddy Battula >Priority: Blocker > Attachments: HADOOP-11614-002.patch, HADOOP-11614-003.patch, > HADOOP-11614-004.patch, HADOOP-11614.patch > > > Remove httpclient dependency from hadoop-openstack and its pom.xml file. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13824) FsShell can suppress the real error if no error message is present
[ https://issues.apache.org/jira/browse/HADOOP-13824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13824: - Fix Version/s: 3.0.0-alpha2 > FsShell can suppress the real error if no error message is present > -- > > Key: HADOOP-13824 > URL: https://issues.apache.org/jira/browse/HADOOP-13824 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.7.1, 2.7.3 >Reporter: Rob Vesse >Assignee: John Zhuge > Labels: supportability > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-13824.001.patch, HADOOP-13824.002.patch, > HADOOP-13824.003.patch > > > The {{FsShell}} error handling assumes in {{displayError()}} that the > {{message}} argument is not {{null}}. However in the case where it is this > leads to a NPE which results in suppressing the actual error information > since a higher level of error handling kicks in and just dumps the stack > trace of the NPE instead. > e.g. > {noformat} > Exception in thread "main" java.lang.NullPointerException > at org.apache.hadoop.fs.FsShell.displayError(FsShell.java:304) > at org.apache.hadoop.fs.FsShell.run(FsShell.java:289) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84) > at org.apache.hadoop.fs.FsShell.main(FsShell.java:340) > {noformat} > This is deeply unhelpful because depending on what the underlying error was > there may be no stack dumped/logged for it (as HADOOP-7114 provides) since > {{FsShell}} doesn't explicitly dump traces for {{IllegalArgumentException}} > which appears to be the underlying cause of my issue. Line 289 is where > {{displayError()}} is called for {{IllegalArgumentException}} handling and > that catch clause does not log the error. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
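The NPE path described in this report can be sketched with a minimal, standalone Java class. This is an illustration of the null-safe fallback the issue calls for (report the exception type when no message is present), not the committed Hadoop patch; the class and method names are invented for the sketch.

```java
// Sketch of null-safe error reporting for FsShell-style handlers: when a
// throwable carries no message (as with a bare IllegalArgumentException),
// fall back to the exception class name instead of dereferencing null.
public class DisplayErrorSketch {

  // Returns a printable one-line error text for any throwable; never throws NPE.
  static String errorText(Throwable e) {
    String message = e.getLocalizedMessage();
    if (message == null) {
      // No message present: surface the real error type rather than masking
      // it behind a NullPointerException in the error handler itself.
      return e.getClass().getName();
    }
    // Print only the first line of a multi-line message.
    return message.split("\n")[0];
  }

  public static void main(String[] args) {
    System.out.println(errorText(new IllegalArgumentException()));
    System.out.println(errorText(new IllegalArgumentException("bad\narg")));
  }
}
```

With this shape, the `IllegalArgumentException` catch clause in the report would print the exception type instead of crashing with a secondary NPE.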
[jira] [Comment Edited] (HADOOP-13881) Remove deprecated APIs added in HADOOP-6709
[ https://issues.apache.org/jira/browse/HADOOP-13881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736958#comment-15736958 ] Akira Ajisaka edited comment on HADOOP-13881 at 12/10/16 1:42 AM: -- bq. Hi Akira, could you comment on downstream usage of these APIs? These are pretty easy for us to keep supporting, so there isn't that much upside from removing them. I couldn't comment that. If we don't remove these deprecated APIs in Hadoop 3, we need to support them for another 2-4 years and the next timing is probably Hadoop 4. For this reason, I filed this jira. I agreed that supporting these APIs are easy, so closing this. was (Author: ajisakaa): bq, Hi Akira, could you comment on downstream usage of these APIs? These are pretty easy for us to keep supporting, so there isn't that much upside from removing them. I couldn't comment that. If we don't remove these deprecated APIs in Hadoop 3, we need to support them for another 2-4 years and the next timing is probably Hadoop 4. For this reason, I filed this jira. I agreed that supporting these APIs are easy, so closing this. > Remove deprecated APIs added in HADOOP-6709 > --- > > Key: HADOOP-13881 > URL: https://issues.apache.org/jira/browse/HADOOP-13881 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-13881.01.patch > > > FileSystem#getName, getNamed, getReplication(Path), delete(Path), > getLength(Path), getBlockSize(Path) were re-instated about 6 years ago. They > can be removed in Hadoop 3. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13881) Remove deprecated APIs added in HADOOP-6709
[ https://issues.apache.org/jira/browse/HADOOP-13881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13881: --- Resolution: Won't Fix Assignee: (was: Akira Ajisaka) Status: Resolved (was: Patch Available) > Remove deprecated APIs added in HADOOP-6709 > --- > > Key: HADOOP-13881 > URL: https://issues.apache.org/jira/browse/HADOOP-13881 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka > Attachments: HADOOP-13881.01.patch > > > FileSystem#getName, getNamed, getReplication(Path), delete(Path), > getLength(Path), getBlockSize(Path) were re-instated about 6 years ago. They > can be removed in Hadoop 3. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13881) Remove deprecated APIs added in HADOOP-6709
[ https://issues.apache.org/jira/browse/HADOOP-13881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736958#comment-15736958 ] Akira Ajisaka commented on HADOOP-13881: bq. Hi Akira, could you comment on downstream usage of these APIs? These are pretty easy for us to keep supporting, so there isn't that much upside from removing them. I couldn't comment that. If we don't remove these deprecated APIs in Hadoop 3, we need to support them for another 2-4 years and the next timing is probably Hadoop 4. For this reason, I filed this jira. I agreed that supporting these APIs are easy, so closing this. > Remove deprecated APIs added in HADOOP-6709 > --- > > Key: HADOOP-13881 > URL: https://issues.apache.org/jira/browse/HADOOP-13881 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-13881.01.patch > > > FileSystem#getName, getNamed, getReplication(Path), delete(Path), > getLength(Path), getBlockSize(Path) were re-instated about 6 years ago. They > can be removed in Hadoop 3. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.
[ https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736924#comment-15736924 ] Lei (Eddy) Xu commented on HADOOP-13449: Thanks a lot for the great work here, [~liuml07] and [~fabbri]. It is great that I can integrate this patch to HADOOP-13650 now. I will start to work on it today and keep you guys updated. > S3Guard: Implement DynamoDBMetadataStore. > - > > Key: HADOOP-13449 > URL: https://issues.apache.org/jira/browse/HADOOP-13449 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Mingliang Liu > Fix For: HADOOP-13345 > > Attachments: HADOOP-13449-HADOOP-13345.000.patch, > HADOOP-13449-HADOOP-13345.001.patch, HADOOP-13449-HADOOP-13345.002.patch, > HADOOP-13449-HADOOP-13345.003.patch, HADOOP-13449-HADOOP-13345.004.patch, > HADOOP-13449-HADOOP-13345.005.patch, HADOOP-13449-HADOOP-13345.006.patch, > HADOOP-13449-HADOOP-13345.007.patch, HADOOP-13449-HADOOP-13345.008.patch, > HADOOP-13449-HADOOP-13345.009.patch, HADOOP-13449-HADOOP-13345.010.patch, > HADOOP-13449-HADOOP-13345.011.patch, HADOOP-13449-HADOOP-13345.012.patch, > HADOOP-13449-HADOOP-13345.013.patch > > > Provide an implementation of the metadata store backed by DynamoDB. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-11614) Remove httpclient dependency from hadoop-openstack
[ https://issues.apache.org/jira/browse/HADOOP-11614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736915#comment-15736915 ] Andrew Wang commented on HADOOP-11614: -- Ping, what's the plan for this JIRA? It's marked as a blocker, but has been sitting idle for a month. If there's no plan to immediately resolve this, I'd like to downgrade the priority. Thanks all. > Remove httpclient dependency from hadoop-openstack > -- > > Key: HADOOP-11614 > URL: https://issues.apache.org/jira/browse/HADOOP-11614 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Assignee: Brahma Reddy Battula >Priority: Blocker > Attachments: HADOOP-11614-002.patch, HADOOP-11614-003.patch, > HADOOP-11614-004.patch, HADOOP-11614.patch > > > Remove httpclient dependency from hadoop-openstack and its pom.xml file. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13805) UGI.getCurrentUser() fails if user does not have a keytab associated
[ https://issues.apache.org/jira/browse/HADOOP-13805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736880#comment-15736880 ] Andrew Wang commented on HADOOP-13805: -- Is this a release blocker? Also HADOOP-13558 has fix versions of 2.7.4 and 2.8.0, is this targeted at those releases as well? > UGI.getCurrentUser() fails if user does not have a keytab associated > > > Key: HADOOP-13805 > URL: https://issues.apache.org/jira/browse/HADOOP-13805 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2 >Reporter: Alejandro Abdelnur >Assignee: Xiao Chen >Priority: Blocker > Attachments: HADOOP-13805.01.patch, HADOOP-13805.02.patch, > HADOOP-13805.03.patch > > > HADOOP-13558 intention was to avoid UGI from trying to renew the TGT when the > UGI is created from an existing Subject as in that case the keytab is not > 'own' by UGI but by the creator of the Subject. > In HADOOP-13558 we introduced a new private UGI constructor > {{UserGroupInformation(Subject subject, final boolean externalKeyTab)}} and > we use with TRUE only when doing a {{UGI.loginUserFromSubject()}}. > The problem is, when we call {{UGI.getCurrentUser()}}, and UGI was created > via a Subject (via the {{UGI.loginUserFromSubject()}} method), we call {{new > UserGroupInformation(subject)}} which will delegate to > {{UserGroupInformation(Subject subject, final boolean externalKeyTab)}} and > that will use externalKeyTab == *FALSE*. > Then the UGI returned by {{UGI.getCurrentUser()}} will attempt to login using > a non-existing keytab if the TGT expired. > This problem is experienced in {{KMSClientProvider}} when used by the HDFS > filesystem client accessing an an encryption zone. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
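The constructor-delegation bug described in this issue can be illustrated with a small standalone class. The names mirror {{UserGroupInformation}} for readability, but this is a self-contained sketch of the delegation pattern, not Hadoop source; an {{Object}} stands in for {{javax.security.auth.Subject}}.

```java
// Sketch of the bug: the one-argument constructor (used on the
// getCurrentUser() path) delegates with externalKeyTab hard-coded to false,
// discarding the fact that the Subject's keytab is managed externally.
public class UgiDelegationSketch {

  final boolean isKeytabExternal;

  // Legacy single-argument constructor: always passes FALSE.
  UgiDelegationSketch(Object subject) {
    this(subject, false);
  }

  UgiDelegationSketch(Object subject, boolean externalKeyTab) {
    this.isKeytabExternal = externalKeyTab;
  }

  public static void main(String[] args) {
    Object subject = new Object(); // stand-in for a Subject

    // loginUserFromSubject() path: flag is set correctly.
    UgiDelegationSketch login = new UgiDelegationSketch(subject, true);

    // getCurrentUser() path: the flag is silently reset to false, so a later
    // TGT renewal would attempt a keytab login that cannot succeed.
    UgiDelegationSketch current = new UgiDelegationSketch(subject);

    System.out.println(login.isKeytabExternal + " " + current.isKeytabExternal);
  }
}
```

The fix direction implied by the report is for the {{getCurrentUser()}} path to carry the external-keytab flag over from the originating Subject instead of defaulting it.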
[jira] [Updated] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, incase multiple loopback addresses are present in /etc/hosts
[ https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-12687: - Priority: Major (was: Blocker) I'm downgrading this issue since it doesn't look like a release blocker. > SecureUtil#getByName should also try to resolve direct hostname, incase > multiple loopback addresses are present in /etc/hosts > - > > Key: HADOOP-12687 > URL: https://issues.apache.org/jira/browse/HADOOP-12687 > Project: Hadoop Common > Issue Type: Bug >Reporter: Junping Du >Assignee: Sunil G > Labels: security > Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, > 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch > > > From > https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt, > we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get > timeout which can be reproduced locally. > When {{/etc/hosts}} has multiple loopback entries, > {{InetAddress.getByName(null)}} will be returning the first entry present in > etc/hosts. Hence its possible that machine hostname can be second in list and > cause {{UnKnownHostException}}. > Suggesting a direct resolve for such hostname scenarios. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
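The fallback suggested in this issue can be sketched as follows. This is an illustration of the "also try a direct resolve of the bare hostname" idea, not the actual {{SecurityUtil}} patch; the method name is invented, and the sketch swallows the second failure by returning null rather than rethrowing.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch: if the first lookup fails (e.g. because multiple loopback entries
// in /etc/hosts leave the machine hostname unresolvable through the normal
// path), retry the bare hostname directly before giving up.
public class DirectResolveSketch {

  static InetAddress resolveWithFallback(String name, String bareHost) {
    try {
      return InetAddress.getByName(name);
    } catch (UnknownHostException primaryFailure) {
      try {
        // Direct resolve of the plain hostname as a second chance.
        return InetAddress.getByName(bareHost);
      } catch (UnknownHostException secondaryFailure) {
        return null; // both lookups failed
      }
    }
  }

  public static void main(String[] args) {
    // ".invalid" is reserved (RFC 2606) and never resolves, forcing the
    // fallback to the bare hostname.
    InetAddress addr = resolveWithFallback("no.such.host.invalid", "localhost");
    System.out.println(addr);
  }
}
```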
[jira] [Commented] (HADOOP-13886) s3guard: ITestS3AFileOperationCost.testFakeDirectoryDeletion failure
[ https://issues.apache.org/jira/browse/HADOOP-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736762#comment-15736762 ] Aaron Fabbri commented on HADOOP-13886: --- [~liuml07] wrote an initial analysis of this failure [here|https://issues.apache.org/jira/browse/HADOOP-13449?focusedCommentId=15734093=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15734093] > s3guard: ITestS3AFileOperationCost.testFakeDirectoryDeletion failure > - > > Key: HADOOP-13886 > URL: https://issues.apache.org/jira/browse/HADOOP-13886 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Aaron Fabbri > > testFakeDirectoryDeletion(org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost) > Time elapsed: 10.011 sec <<< FAILURE! > java.lang.AssertionError: after rename(srcFilePath, destFilePath): > directories_created expected:<1> but was:<0> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at > org.apache.hadoop.fs.s3a.S3ATestUtils$MetricDiff.assertDiffEquals(S3ATestUtils.java:431) > at > org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:254) > More details to follow in comments. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13824) FsShell can suppress the real error if no error message is present
[ https://issues.apache.org/jira/browse/HADOOP-13824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736749#comment-15736749 ] John Zhuge commented on HADOOP-13824: - Thanks [~jojochuang] for the review and commit! Thanks [~rvesse] for reporting and [~ste...@apache.org] for the review. > FsShell can suppress the real error if no error message is present > -- > > Key: HADOOP-13824 > URL: https://issues.apache.org/jira/browse/HADOOP-13824 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.7.1, 2.7.3 >Reporter: Rob Vesse >Assignee: John Zhuge > Labels: supportability > Fix For: 2.8.0 > > Attachments: HADOOP-13824.001.patch, HADOOP-13824.002.patch, > HADOOP-13824.003.patch > > > The {{FsShell}} error handling assumes in {{displayError()}} that the > {{message}} argument is not {{null}}. However in the case where it is this > leads to a NPE which results in suppressing the actual error information > since a higher level of error handling kicks in and just dumps the stack > trace of the NPE instead. > e.g. > {noformat} > Exception in thread "main" java.lang.NullPointerException > at org.apache.hadoop.fs.FsShell.displayError(FsShell.java:304) > at org.apache.hadoop.fs.FsShell.run(FsShell.java:289) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84) > at org.apache.hadoop.fs.FsShell.main(FsShell.java:340) > {noformat} > This is deeply unhelpful because depending on what the underlying error was > there may be no stack dumped/logged for it (as HADOOP-7114 provides) since > {{FsShell}} doesn't explicitly dump traces for {{IllegalArgumentException}} > which appears to be the underlying cause of my issue. Line 289 is where > {{displayError()}} is called for {{IllegalArgumentException}} handling and > that catch clause does not log the error. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.
[ https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736711#comment-15736711 ] Mingliang Liu commented on HADOOP-13449: Yes just edited the comment above. Lei helped us a lot in reviewing patches. I think we can get [HADOOP-13650] in soon after review. > S3Guard: Implement DynamoDBMetadataStore. > - > > Key: HADOOP-13449 > URL: https://issues.apache.org/jira/browse/HADOOP-13449 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Mingliang Liu > Fix For: HADOOP-13345 > > Attachments: HADOOP-13449-HADOOP-13345.000.patch, > HADOOP-13449-HADOOP-13345.001.patch, HADOOP-13449-HADOOP-13345.002.patch, > HADOOP-13449-HADOOP-13345.003.patch, HADOOP-13449-HADOOP-13345.004.patch, > HADOOP-13449-HADOOP-13345.005.patch, HADOOP-13449-HADOOP-13345.006.patch, > HADOOP-13449-HADOOP-13345.007.patch, HADOOP-13449-HADOOP-13345.008.patch, > HADOOP-13449-HADOOP-13345.009.patch, HADOOP-13449-HADOOP-13345.010.patch, > HADOOP-13449-HADOOP-13345.011.patch, HADOOP-13449-HADOOP-13345.012.patch, > HADOOP-13449-HADOOP-13345.013.patch > > > Provide an implementation of the metadata store backed by DynamoDB. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13887) Support for client-side encryption in S3A file system
Jeeyoung Kim created HADOOP-13887: - Summary: Support for client-side encryption in S3A file system Key: HADOOP-13887 URL: https://issues.apache.org/jira/browse/HADOOP-13887 Project: Hadoop Common Issue Type: New Feature Reporter: Jeeyoung Kim Priority: Minor Expose the client-side encryption option documented in Amazon S3 documentation - http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html Currently this is not exposed in Hadoop but it is exposed as an option in AWS Java SDK, which Hadoop currently includes. It should be trivial to propagate this as a parameter passed to the S3 client used in S3AFileSystem.java -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
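If exposed as the issue proposes, the option would presumably surface as S3A configuration properties in core-site.xml. The property names below are hypothetical, for illustration only; the JIRA does not specify any configuration keys.

```xml
<!-- Hypothetical property names; HADOOP-13887 does not define these keys. -->
<property>
  <name>fs.s3a.client-side-encryption.enabled</name>
  <value>true</value>
</property>
<property>
  <name>fs.s3a.client-side-encryption.kms-key-id</name>
  <value>arn:aws:kms:REGION:ACCOUNT:key/EXAMPLE</value>
</property>
```

S3AFileSystem would then read these keys when constructing its S3 client and select the encrypting client variant from the AWS SDK instead of the plain one.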
[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.
[ https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736704#comment-15736704 ] Aaron Fabbri commented on HADOOP-13449: --- You are welcome! Thanks also to [~eddyxu] for his help getting to this point. > S3Guard: Implement DynamoDBMetadataStore. > - > > Key: HADOOP-13449 > URL: https://issues.apache.org/jira/browse/HADOOP-13449 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Mingliang Liu > Fix For: HADOOP-13345 > > Attachments: HADOOP-13449-HADOOP-13345.000.patch, > HADOOP-13449-HADOOP-13345.001.patch, HADOOP-13449-HADOOP-13345.002.patch, > HADOOP-13449-HADOOP-13345.003.patch, HADOOP-13449-HADOOP-13345.004.patch, > HADOOP-13449-HADOOP-13345.005.patch, HADOOP-13449-HADOOP-13345.006.patch, > HADOOP-13449-HADOOP-13345.007.patch, HADOOP-13449-HADOOP-13345.008.patch, > HADOOP-13449-HADOOP-13345.009.patch, HADOOP-13449-HADOOP-13345.010.patch, > HADOOP-13449-HADOOP-13345.011.patch, HADOOP-13449-HADOOP-13345.012.patch, > HADOOP-13449-HADOOP-13345.013.patch > > > Provide an implementation of the metadata store backed by DynamoDB. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13824) FsShell can suppress the real error if no error message is present
[ https://issues.apache.org/jira/browse/HADOOP-13824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-13824: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) Committed to trunk, branch-2 and branch-2.8. Thanks to [~rvesse] for reporting the issue, [~jzhuge] for contributing the patch and [~steve_l] for reviewing it! > FsShell can suppress the real error if no error message is present > -- > > Key: HADOOP-13824 > URL: https://issues.apache.org/jira/browse/HADOOP-13824 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.7.1, 2.7.3 >Reporter: Rob Vesse >Assignee: John Zhuge > Labels: supportability > Fix For: 2.8.0 > > Attachments: HADOOP-13824.001.patch, HADOOP-13824.002.patch, > HADOOP-13824.003.patch > > > The {{FsShell}} error handling assumes in {{displayError()}} that the > {{message}} argument is not {{null}}. However in the case where it is this > leads to a NPE which results in suppressing the actual error information > since a higher level of error handling kicks in and just dumps the stack > trace of the NPE instead. > e.g. > {noformat} > Exception in thread "main" java.lang.NullPointerException > at org.apache.hadoop.fs.FsShell.displayError(FsShell.java:304) > at org.apache.hadoop.fs.FsShell.run(FsShell.java:289) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84) > at org.apache.hadoop.fs.FsShell.main(FsShell.java:340) > {noformat} > This is deeply unhelpful because depending on what the underlying error was > there may be no stack dumped/logged for it (as HADOOP-7114 provides) since > {{FsShell}} doesn't explicitly dump traces for {{IllegalArgumentException}} > which appears to be the underlying cause of my issue. Line 289 is where > {{displayError()}} is called for {{IllegalArgumentException}} handling and > that catch clause does not log the error. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13886) s3guard: ITestS3AFileOperationCost.testFakeDirectoryDeletion failure
[ https://issues.apache.org/jira/browse/HADOOP-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13886: --- Affects Version/s: HADOOP-13345 > s3guard: ITestS3AFileOperationCost.testFakeDirectoryDeletion failure > - > > Key: HADOOP-13886 > URL: https://issues.apache.org/jira/browse/HADOOP-13886 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Aaron Fabbri > > testFakeDirectoryDeletion(org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost) > Time elapsed: 10.011 sec <<< FAILURE! > java.lang.AssertionError: after rename(srcFilePath, destFilePath): > directories_created expected:<1> but was:<0> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at > org.apache.hadoop.fs.s3a.S3ATestUtils$MetricDiff.assertDiffEquals(S3ATestUtils.java:431) > at > org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:254) > More details to follow in comments. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.
[ https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736694#comment-15736694 ] Mingliang Liu edited comment on HADOOP-13449 at 12/9/16 11:43 PM: -- Thank you [~fabbri] for your great help, discussion, review, testing, bug fixing! Thanks [~ste...@apache.org] and [~cnauroth] for the helpful discussion and initial patch. Thank you [~eddyxu] for insightful comments to make the patch in good shape. Let's move on to other tasks and make this feature branch be merged to trunk early. was (Author: liuml07): Thank you [~fabbri] for your great help, discussion, review, testing, bug fixing! Thanks [~ste...@apache.org] and [~cnauroth] for the discussion and initial patch. Let's move on to other tasks and make this feature branch be merged to trunk early. > S3Guard: Implement DynamoDBMetadataStore. > - > > Key: HADOOP-13449 > URL: https://issues.apache.org/jira/browse/HADOOP-13449 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Mingliang Liu > Fix For: HADOOP-13345 > > Attachments: HADOOP-13449-HADOOP-13345.000.patch, > HADOOP-13449-HADOOP-13345.001.patch, HADOOP-13449-HADOOP-13345.002.patch, > HADOOP-13449-HADOOP-13345.003.patch, HADOOP-13449-HADOOP-13345.004.patch, > HADOOP-13449-HADOOP-13345.005.patch, HADOOP-13449-HADOOP-13345.006.patch, > HADOOP-13449-HADOOP-13345.007.patch, HADOOP-13449-HADOOP-13345.008.patch, > HADOOP-13449-HADOOP-13345.009.patch, HADOOP-13449-HADOOP-13345.010.patch, > HADOOP-13449-HADOOP-13345.011.patch, HADOOP-13449-HADOOP-13345.012.patch, > HADOOP-13449-HADOOP-13345.013.patch > > > Provide an implementation of the metadata store backed by DynamoDB. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.
[ https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736694#comment-15736694 ] Mingliang Liu commented on HADOOP-13449: Thank you [~fabbri] for your great help, discussion, review, testing, bug fixing! Thanks [~ste...@apache.org] and [~cnauroth] for the discussion and initial patch. Let's move on to other tasks and make this feature branch be merged to trunk early. > S3Guard: Implement DynamoDBMetadataStore. > - > > Key: HADOOP-13449 > URL: https://issues.apache.org/jira/browse/HADOOP-13449 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Mingliang Liu > Fix For: HADOOP-13345 > > Attachments: HADOOP-13449-HADOOP-13345.000.patch, > HADOOP-13449-HADOOP-13345.001.patch, HADOOP-13449-HADOOP-13345.002.patch, > HADOOP-13449-HADOOP-13345.003.patch, HADOOP-13449-HADOOP-13345.004.patch, > HADOOP-13449-HADOOP-13345.005.patch, HADOOP-13449-HADOOP-13345.006.patch, > HADOOP-13449-HADOOP-13345.007.patch, HADOOP-13449-HADOOP-13345.008.patch, > HADOOP-13449-HADOOP-13345.009.patch, HADOOP-13449-HADOOP-13345.010.patch, > HADOOP-13449-HADOOP-13345.011.patch, HADOOP-13449-HADOOP-13345.012.patch, > HADOOP-13449-HADOOP-13345.013.patch > > > Provide an implementation of the metadata store backed by DynamoDB. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13824) FsShell can suppress the real error if no error message is present
[ https://issues.apache.org/jira/browse/HADOOP-13824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736691#comment-15736691 ] Hudson commented on HADOOP-13824: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10978 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10978/]) HADOOP-13824. FsShell can suppress the real error if no error message is (weichiu: rev b606e025f10daed18b90b45ac00cd0c82e818581) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsShell.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShell.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java > FsShell can suppress the real error if no error message is present > -- > > Key: HADOOP-13824 > URL: https://issues.apache.org/jira/browse/HADOOP-13824 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.7.1, 2.7.3 >Reporter: Rob Vesse >Assignee: John Zhuge > Labels: supportability > Attachments: HADOOP-13824.001.patch, HADOOP-13824.002.patch, > HADOOP-13824.003.patch > > > The {{FsShell}} error handling assumes in {{displayError()}} that the > {{message}} argument is not {{null}}. However in the case where it is this > leads to a NPE which results in suppressing the actual error information > since a higher level of error handling kicks in and just dumps the stack > trace of the NPE instead. > e.g. 
> {noformat} > Exception in thread "main" java.lang.NullPointerException > at org.apache.hadoop.fs.FsShell.displayError(FsShell.java:304) > at org.apache.hadoop.fs.FsShell.run(FsShell.java:289) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84) > at org.apache.hadoop.fs.FsShell.main(FsShell.java:340) > {noformat} > This is deeply unhelpful because depending on what the underlying error was > there may be no stack dumped/logged for it (as HADOOP-7114 provides) since > {{FsShell}} doesn't explicitly dump traces for {{IllegalArgumentException}} > which appears to be the underlying cause of my issue. Line 289 is where > {{displayError()}} is called for {{IllegalArgumentException}} handling and > that catch clause does not log the error. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.
[ https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Fabbri updated HADOOP-13449: -- Resolution: Fixed Fix Version/s: HADOOP-13345 Status: Resolved (was: Patch Available) > S3Guard: Implement DynamoDBMetadataStore. > - > > Key: HADOOP-13449 > URL: https://issues.apache.org/jira/browse/HADOOP-13449 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Mingliang Liu > Fix For: HADOOP-13345 > > Attachments: HADOOP-13449-HADOOP-13345.000.patch, > HADOOP-13449-HADOOP-13345.001.patch, HADOOP-13449-HADOOP-13345.002.patch, > HADOOP-13449-HADOOP-13345.003.patch, HADOOP-13449-HADOOP-13345.004.patch, > HADOOP-13449-HADOOP-13345.005.patch, HADOOP-13449-HADOOP-13345.006.patch, > HADOOP-13449-HADOOP-13345.007.patch, HADOOP-13449-HADOOP-13345.008.patch, > HADOOP-13449-HADOOP-13345.009.patch, HADOOP-13449-HADOOP-13345.010.patch, > HADOOP-13449-HADOOP-13345.011.patch, HADOOP-13449-HADOOP-13345.012.patch, > HADOOP-13449-HADOOP-13345.013.patch > > > Provide an implementation of the metadata store backed by DynamoDB. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.
[ https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736682#comment-15736682 ] Aaron Fabbri commented on HADOOP-13449: --- I committed this to the HADOOP-13345 feature branch. Thank you for all your hard work on this [~liuml07]. I ran all integration tests against s3-us-west-2.amazonaws.com endpoint. Failures were as expected: ITestS3AFileOperationCost.testFakeDirectoryDeletion:254->Assert.assertEquals:555 ITestJets3tNativeS3FileSystemContract>NativeS3FileSystemContractBaseTest.testListStatusForRoot:66 Root directory is not empty; expected:<0> but was:<3> ITestS3AAWSCredentialsProvider.testAnonymousProvider:133 » AWSServiceIO initia... ITestS3ACredentialsInURL.testInstantiateFromURL:86 » InterruptedIO initializin... ITestS3AFileSystemContract>FileSystemContractBaseTest.testRenameToDirWithSamePrefixAllowed:669->FileSystemContractBaseTest.rename:525 » AWSServiceIO Which are covered by HADOOP-13876 and HADOOP-13886. > S3Guard: Implement DynamoDBMetadataStore. > - > > Key: HADOOP-13449 > URL: https://issues.apache.org/jira/browse/HADOOP-13449 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Mingliang Liu > Attachments: HADOOP-13449-HADOOP-13345.000.patch, > HADOOP-13449-HADOOP-13345.001.patch, HADOOP-13449-HADOOP-13345.002.patch, > HADOOP-13449-HADOOP-13345.003.patch, HADOOP-13449-HADOOP-13345.004.patch, > HADOOP-13449-HADOOP-13345.005.patch, HADOOP-13449-HADOOP-13345.006.patch, > HADOOP-13449-HADOOP-13345.007.patch, HADOOP-13449-HADOOP-13345.008.patch, > HADOOP-13449-HADOOP-13345.009.patch, HADOOP-13449-HADOOP-13345.010.patch, > HADOOP-13449-HADOOP-13345.011.patch, HADOOP-13449-HADOOP-13345.012.patch, > HADOOP-13449-HADOOP-13345.013.patch > > > Provide an implementation of the metadata store backed by DynamoDB. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13886) s3guard: ITestS3AFileOperationCost.testFakeDirectoryDeletion failure
Aaron Fabbri created HADOOP-13886: - Summary: s3guard: ITestS3AFileOperationCost.testFakeDirectoryDeletion failure Key: HADOOP-13886 URL: https://issues.apache.org/jira/browse/HADOOP-13886 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Reporter: Aaron Fabbri testFakeDirectoryDeletion(org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost) Time elapsed: 10.011 sec <<< FAILURE! java.lang.AssertionError: after rename(srcFilePath, destFilePath): directories_created expected:<1> but was:<0> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:743) at org.junit.Assert.assertEquals(Assert.java:118) at org.junit.Assert.assertEquals(Assert.java:555) at org.apache.hadoop.fs.s3a.S3ATestUtils$MetricDiff.assertDiffEquals(S3ATestUtils.java:431) at org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:254) More details to follow in comments. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13824) FsShell can suppress the real error if no error message is present
[ https://issues.apache.org/jira/browse/HADOOP-13824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736637#comment-15736637 ] Wei-Chiu Chuang commented on HADOOP-13824: -- Committing 003 patch. > FsShell can suppress the real error if no error message is present > -- > > Key: HADOOP-13824 > URL: https://issues.apache.org/jira/browse/HADOOP-13824 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.7.1, 2.7.3 >Reporter: Rob Vesse >Assignee: John Zhuge > Labels: supportability > Attachments: HADOOP-13824.001.patch, HADOOP-13824.002.patch, > HADOOP-13824.003.patch > > > The {{FsShell}} error handling assumes in {{displayError()}} that the > {{message}} argument is not {{null}}. However in the case where it is this > leads to a NPE which results in suppressing the actual error information > since a higher level of error handling kicks in and just dumps the stack > trace of the NPE instead. > e.g. > {noformat} > Exception in thread "main" java.lang.NullPointerException > at org.apache.hadoop.fs.FsShell.displayError(FsShell.java:304) > at org.apache.hadoop.fs.FsShell.run(FsShell.java:289) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84) > at org.apache.hadoop.fs.FsShell.main(FsShell.java:340) > {noformat} > This is deeply unhelpful because depending on what the underlying error was > there may be no stack dumped/logged for it (as HADOOP-7114 provides) since > {{FsShell}} doesn't explicitly dump traces for {{IllegalArgumentException}} > which appears to be the underlying cause of my issue. Line 289 is where > {{displayError()}} is called for {{IllegalArgumentException}} handling and > that catch clause does not log the error. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13885) Implement getLinkTarget for ViewFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736613#comment-15736613 ] Hadoop QA commented on HADOOP-13885: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 45s{color} | {color:green} 
the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 11s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 48m 7s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HADOOP-13885 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12842621/HADOOP-13885.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 9cd0ca421101 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 5bd7dec | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/11237/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/11237/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Implement getLinkTarget for ViewFileSystem > -- > > Key: HADOOP-13885 > URL: https://issues.apache.org/jira/browse/HADOOP-13885 > Project: Hadoop Common > Issue Type: Task > Components: viewfs >Affects Versions: 3.0.0-alpha1 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > Attachments: HADOOP-13885.01.patch > > > ViewFileSystem doesn't override FileSystem#getLinkTarget(). So, when view > filesystem is used to resolve the symbolic links, the default FileSystem >
[jira] [Updated] (HADOOP-13885) Implement getLinkTarget for ViewFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HADOOP-13885: Status: Patch Available (was: Open) > Implement getLinkTarget for ViewFileSystem > -- > > Key: HADOOP-13885 > URL: https://issues.apache.org/jira/browse/HADOOP-13885 > Project: Hadoop Common > Issue Type: Task > Components: viewfs >Affects Versions: 3.0.0-alpha1 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > Attachments: HADOOP-13885.01.patch > > > ViewFileSystem doesn't override FileSystem#getLinkTarget(). So, when view > filesystem is used to resolve the symbolic links, the default FileSystem > implementation throws UnsupportedOperationException. > The proposal is to define getLinkTarget() for ViewFileSystem and invoke the > target FileSystem for resolving the symbolic links. Path thus returned is > preferred to be a viewfs qualified path, so that it can be used again on the > ViewFileSystem handle. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13885) Implement getLinkTarget for ViewFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HADOOP-13885: Attachment: HADOOP-13885.01.patch Attached a patch which defines getLinkTarget() for ViewFileSystem along with testcases. [~andrew.wang], please take a look at the patch. > Implement getLinkTarget for ViewFileSystem > -- > > Key: HADOOP-13885 > URL: https://issues.apache.org/jira/browse/HADOOP-13885 > Project: Hadoop Common > Issue Type: Task > Components: viewfs >Affects Versions: 3.0.0-alpha1 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > Attachments: HADOOP-13885.01.patch > > > ViewFileSystem doesn't override FileSystem#getLinkTarget(). So, when view > filesystem is used to resolve the symbolic links, the default FileSystem > implementation throws UnsupportedOperationException. > The proposal is to define getLinkTarget() for ViewFileSystem and invoke the > target FileSystem for resolving the symbolic links. Path thus returned is > preferred to be a viewfs qualified path, so that it can be used again on the > ViewFileSystem handle. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty
[ https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736470#comment-15736470 ] Robert Kanter commented on HADOOP-13597: A few additional comments: # I don't know if {{getPasswordString}} is a good idea. Won't that just make things confusing for users? They try to set a password in the config, but it ends up being null (probably NPE?) instead of throwing an IOE about not finding the password. The latter would make the problem clearer. # Should we mark {{HttpServer2#HTTP_MAX_THREADS}} as {{\@deprecated}}? # Not your doing, but typo in {{HttpServer2}} comment: {{explicitly destroy the secrete provider}} > Switch KMS from Tomcat to Jetty > --- > > Key: HADOOP-13597 > URL: https://issues.apache.org/jira/browse/HADOOP-13597 > Project: Hadoop Common > Issue Type: New Feature > Components: kms >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-13597.001.patch, HADOOP-13597.002.patch, > HADOOP-13597.003.patch > > > The Tomcat 6 we are using will reach EOL at the end of 2017. While there are > other good options, I would propose switching to {{Jetty 9}} for the > following reasons: > * Easier migration. Both Tomcat and Jetty are based on {{Servlet > Containers}}, so we don't have to change client code that much. It would require > more work to switch to {{JAX-RS}}. > * Well established. > * Good performance and scalability. > Other alternatives: > * Jersey + Grizzly > * Tomcat 8 > Your opinions will be greatly appreciated. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Created] (HADOOP-13885) Implement getLinkTarget for ViewFileSystem
Manoj Govindassamy created HADOOP-13885: --- Summary: Implement getLinkTarget for ViewFileSystem Key: HADOOP-13885 URL: https://issues.apache.org/jira/browse/HADOOP-13885 Project: Hadoop Common Issue Type: Task Components: viewfs Affects Versions: 3.0.0-alpha1 Reporter: Manoj Govindassamy Assignee: Manoj Govindassamy ViewFileSystem doesn't override FileSystem#getLinkTarget(). So, when view filesystem is used to resolve the symbolic links, the default FileSystem implementation throws UnsupportedOperationException. The proposal is to define getLinkTarget() for ViewFileSystem and invoke the target FileSystem for resolving the symbolic links. Path thus returned is preferred to be a viewfs qualified path, so that it can be used again on the ViewFileSystem handle. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
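The proposal above — resolve the mount point for a viewfs path, delegate to the target filesystem, and re-qualify the result — can be sketched with a plain mount-table lookup. This is a standalone illustration of the delegation idea only (it is not the actual ViewFileSystem code, and the mount entries are hypothetical):

```java
import java.util.TreeMap;

// Illustrative sketch of the mount-point resolution step behind the
// getLinkTarget() proposal: find the longest matching mount prefix for a
// viewfs path, then rewrite the path against the target filesystem so the
// call can be delegated there. Not the actual ViewFileSystem code.
public class ViewFsLinkSketch {
    // mount point prefix -> target filesystem root,
    // e.g. "/data" -> "hdfs://nn1/data" (example values only)
    private final TreeMap<String, String> mounts = new TreeMap<>();

    void addMount(String viewPrefix, String target) {
        mounts.put(viewPrefix, target);
    }

    // Map a viewfs path to its target-filesystem path. Descending key order
    // ensures a longer, more specific mount prefix wins over a shorter one.
    String resolve(String viewPath) {
        for (String prefix : mounts.descendingKeySet()) {
            if (viewPath.startsWith(prefix)) {
                return mounts.get(prefix) + viewPath.substring(prefix.length());
            }
        }
        throw new IllegalArgumentException("no mount point for " + viewPath);
    }
}
```

In the real implementation the resolved target path would be handed to the mounted filesystem's getLinkTarget(), and the returned path re-qualified as a viewfs path so it remains usable on the ViewFileSystem handle, as the description proposes.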
[jira] [Commented] (HADOOP-13881) Remove deprecated APIs added in HADOOP-6709
[ https://issues.apache.org/jira/browse/HADOOP-13881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736196#comment-15736196 ] Andrew Wang commented on HADOOP-13881: -- I see [~ste...@apache.org] put a -1 on HDFS-11228 for similar reasons. > Remove deprecated APIs added in HADOOP-6709 > --- > > Key: HADOOP-13881 > URL: https://issues.apache.org/jira/browse/HADOOP-13881 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-13881.01.patch > > > FileSystem#getName, getNamed, getReplication(Path), delete(Path), > getLength(Path), getBlockSize(Path) were re-instated about 6 years ago. They > can be removed in Hadoop 3. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13881) Remove deprecated APIs added in HADOOP-6709
[ https://issues.apache.org/jira/browse/HADOOP-13881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736174#comment-15736174 ] Andrew Wang commented on HADOOP-13881: -- Hi Akira, could you comment on downstream usage of these APIs? These are pretty easy for us to keep supporting, so there isn't that much upside from removing them. > Remove deprecated APIs added in HADOOP-6709 > --- > > Key: HADOOP-13881 > URL: https://issues.apache.org/jira/browse/HADOOP-13881 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-13881.01.patch > > > FileSystem#getName, getNamed, getReplication(Path), delete(Path), > getLength(Path), getBlockSize(Path) were re-instated about 6 years ago. They > can be removed in Hadoop 3. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set
[ https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736159#comment-15736159 ] Konstantinos Karanasos commented on HADOOP-13852: - Thanks, [~ste...@apache.org] and [~ajisakaa], for taking care of this. > hadoop build to allow hadoop version property to be explicitly set > -- > > Key: HADOOP-13852 > URL: https://issues.apache.org/jira/browse/HADOOP-13852 > Project: Hadoop Common > Issue Type: New Feature > Components: build >Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13582-003.patch, HADOOP-13852-001.patch, > HADOOP-13852-002.patch > > > Hive (and transitively) Spark, won't start on Hadoop 3.x as the shim layer > rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to > have the Hadoop version (currently set to pom.version) to be overridden > manually. > This will not affect version names of artifacts, merely the declared Hadoop > version visible in {{VersionInfo.getVersion()}} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13883) Add description of -fs option in generic command usage
[ https://issues.apache.org/jira/browse/HADOOP-13883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735912#comment-15735912 ] Allen Wittenauer commented on HADOOP-13883: --- It's not listed there because not everything supports -fs. Also, why wasn't this alphabetized? > Add description of -fs option in generic command usage > -- > > Key: HADOOP-13883 > URL: https://issues.apache.org/jira/browse/HADOOP-13883 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-13883.001.patch > > > Currently the description of option '-fs' is missing in generic command > usage in documentation {{CommandManual.md}}. And the users won't know to use > this option, while this option already makes sense to {{hdfs dfsadmin}}, > {{hdfs fsck}}, etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-11859) PseudoAuthenticationHandler fails with httpcomponents v4.4
[ https://issues.apache.org/jira/browse/HADOOP-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735805#comment-15735805 ] Kihwal Lee edited comment on HADOOP-11859 at 12/9/16 5:06 PM: -- cherry-picked to branch-2.7. was (Author: kihwal): cherry-pick to branch-2.7. > PseudoAuthenticationHandler fails with httpcomponents v4.4 > -- > > Key: HADOOP-11859 > URL: https://issues.apache.org/jira/browse/HADOOP-11859 > Project: Hadoop Common > Issue Type: Bug >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1 > > Attachments: HADOOP-11859.patch > > > This shows in the context of WebHCat and Hive (which recently moved to > httpcomponents:httpclient:4.4) but could happen in other places. > URLEncodedUtils.parse(String, Charset) which is called from > PseudoAuthenticationHandler.getUserName() with the 1st argument produced by > HttpServletRequest.getQueryString(). > The later returns NULL if there is no query string in the URL. > in httpcoponents:httpclient:4.2.5 parse() gracefully handles first argument > being NULL, but in 4.4 it NPEs. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-11859) PseudoAuthenticationHandler fails with httpcomponents v4.4
[ https://issues.apache.org/jira/browse/HADOOP-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735805#comment-15735805 ] Kihwal Lee commented on HADOOP-11859: - cherry-pick to branch-2.7. > PseudoAuthenticationHandler fails with httpcomponents v4.4 > -- > > Key: HADOOP-11859 > URL: https://issues.apache.org/jira/browse/HADOOP-11859 > Project: Hadoop Common > Issue Type: Bug >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1 > > Attachments: HADOOP-11859.patch > > > This shows in the context of WebHCat and Hive (which recently moved to > httpcomponents:httpclient:4.4) but could happen in other places. > URLEncodedUtils.parse(String, Charset) which is called from > PseudoAuthenticationHandler.getUserName() with the 1st argument produced by > HttpServletRequest.getQueryString(). > The later returns NULL if there is no query string in the URL. > in httpcoponents:httpclient:4.2.5 parse() gracefully handles first argument > being NULL, but in 4.4 it NPEs. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-11859) PseudoAuthenticationHandler fails with httpcomponents v4.4
[ https://issues.apache.org/jira/browse/HADOOP-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee updated HADOOP-11859: Fix Version/s: 2.7.4 > PseudoAuthenticationHandler fails with httpcomponents v4.4 > -- > > Key: HADOOP-11859 > URL: https://issues.apache.org/jira/browse/HADOOP-11859 > Project: Hadoop Common > Issue Type: Bug >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1 > > Attachments: HADOOP-11859.patch > > > This shows in the context of WebHCat and Hive (which recently moved to > httpcomponents:httpclient:4.4) but could happen in other places. > URLEncodedUtils.parse(String, Charset) which is called from > PseudoAuthenticationHandler.getUserName() with the 1st argument produced by > HttpServletRequest.getQueryString(). > The later returns NULL if there is no query string in the URL. > in httpcoponents:httpclient:4.2.5 parse() gracefully handles first argument > being NULL, but in 4.4 it NPEs. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
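The failure mode above — `HttpServletRequest.getQueryString()` returning null and httpclient 4.4's `URLEncodedUtils.parse` throwing an NPE on it — comes down to guarding the null before parsing. A minimal standalone sketch of that defensive pattern (method and class names here are illustrative, not the actual PseudoAuthenticationHandler code):

```java
// Sketch of the defensive fix: check for a null query string before
// handing it to a parser that NPEs on null (as httpclient 4.4 does).
// Names are illustrative, not the actual PseudoAuthenticationHandler code.
public class QueryParamSketch {

    // Returns the value of the named parameter, or null when the request
    // carried no query string at all or the parameter is absent.
    static String getParam(String queryString, String name) {
        if (queryString == null) {
            return null;  // no query string on the URL: nothing to parse
        }
        for (String pair : queryString.split("&")) {
            int eq = pair.indexOf('=');
            if (eq > 0 && pair.substring(0, eq).equals(name)) {
                return pair.substring(eq + 1);
            }
        }
        return null;
    }
}
```

With the early null check, a request like `GET /path` (no `?user.name=...`) simply yields no user name, letting the handler report "anonymous not allowed" instead of crashing with a NullPointerException.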
[jira] [Commented] (HADOOP-12767) update apache httpclient version to 4.5.2; httpcore to 4.4.4
[ https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735734#comment-15735734 ] Kihwal Lee commented on HADOOP-12767: - Can we pull in HADOOP-11859 to 2.7? What else is preventing this from going to 2.7? > update apache httpclient version to 4.5.2; httpcore to 4.4.4 > > > Key: HADOOP-12767 > URL: https://issues.apache.org/jira/browse/HADOOP-12767 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 2.7.2 >Reporter: Artem Aliev >Assignee: Artem Aliev > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HADOOP-12767-branch-2-005.patch, > HADOOP-12767-branch-2.004.patch, HADOOP-12767-branch-2.005.patch, > HADOOP-12767.001.patch, HADOOP-12767.002.patch, HADOOP-12767.003.patch, > HADOOP-12767.004.patch > > > Various SSL security fixes are needed. See: CVE-2012-6153, CVE-2011-4461, > CVE-2014-3577, CVE-2015-5262. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13868) New defaults for S3A multi-part configuration
[ https://issues.apache.org/jira/browse/HADOOP-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735690#comment-15735690 ] Hadoop QA commented on HADOOP-13868: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} 
Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 26s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 34s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 77m 9s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HADOOP-13868 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12842566/HADOOP-13868.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle | | uname | Linux dfd8afad53d8 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 80b8023 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/11236/testReport/ | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/11236/console | | Powered by | Apache Yetus
[jira] [Commented] (HADOOP-13868) New defaults for S3A multi-part configuration
[ https://issues.apache.org/jira/browse/HADOOP-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735529#comment-15735529 ] Sean Mackrory commented on HADOOP-13868: {quote}128MB seems a reasonable increase{quote} Just to be clear, it's a decrease. I was mistaken about what the previous defaults were in trunk. But the current value is also significantly sub-optimal (at least in all the US regions I tested, despite significantly varying raw performance between them). > New defaults for S3A multi-part configuration > - > > Key: HADOOP-13868 > URL: https://issues.apache.org/jira/browse/HADOOP-13868 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 2.7.0, 3.0.0-alpha1 >Reporter: Sean Mackrory >Assignee: Sean Mackrory > Attachments: HADOOP-13868.001.patch, HADOOP-13868.002.patch, > optimizing-multipart-s3a.sh > > > I've been looking at a big performance regression when writing to S3 from > Spark that appears to have been introduced with HADOOP-12891. > In the Amazon SDK, the default threshold for multi-part copies is 320x the > threshold for multi-part uploads (and the block size is 20x bigger), so I > don't think it's necessarily wise for us to have them be the same. > I did some quick tests and it seems to me the sweet spot when multi-part > copies start being faster is around 512MB. It wasn't as significant, but > using 104857600 (Amazon's default) for the blocksize was also slightly better. > I propose we do the following, although they're independent decisions: > (1) Split the configuration. Ideally, I'd like to have > fs.s3a.multipart.copy.threshold and fs.s3a.multipart.upload.threshold (and > corresponding properties for the block size). But then there's the question > of what to do with the existing fs.s3a.multipart.* properties. Deprecation? > Leave it as a short-hand for configuring both (that's overridden by the more > specific properties?). > (2) Consider increasing the default values. 
In my tests, 256 MB seemed to be > where multipart uploads came into their own, and 512 MB was where multipart > copies started outperforming the alternative. Would be interested to hear > what other people have seen. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
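The fallback semantics proposed in point (1) — specific copy/upload thresholds overriding a shared legacy property — can be sketched in a few lines. This is a self-contained illustration of the proposed resolution order, not the actual patch; the property names follow the comment above and the resolution logic is an assumption.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class MultipartThresholds {
    static final long MB = 1024L * 1024L;

    // Resolve a specific property; fall back to the legacy shared
    // fs.s3a.multipart.threshold, then to the built-in default.
    static long resolve(Map<String, Long> conf, String specific, long dflt) {
        Long v = conf.get(specific);
        if (v != null) {
            return v;
        }
        Long legacy = conf.get("fs.s3a.multipart.threshold");
        return legacy != null ? legacy : dflt;
    }

    public static void main(String[] args) {
        Map<String, Long> conf = new HashMap<>();
        conf.put("fs.s3a.multipart.copy.threshold", 512 * MB);
        // Copy threshold set explicitly; upload threshold falls back.
        System.out.println(resolve(conf, "fs.s3a.multipart.copy.threshold", 128 * MB) / MB);
        System.out.println(resolve(conf, "fs.s3a.multipart.upload.threshold", 128 * MB) / MB);
    }
}
```

Under this scheme the legacy property keeps working as a shorthand for both, and the more specific keys win when present.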
[jira] [Updated] (HADOOP-13868) New defaults for S3A multi-part configuration
[ https://issues.apache.org/jira/browse/HADOOP-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Mackrory updated HADOOP-13868: --- Attachment: HADOOP-13868.002.patch Attaching a patch using the M suffix where possible. I thought it would be cool to tweak things so the Constants.java values could also be in that format and they could be consistent everywhere, but that requires changing a bunch of functions used elsewhere to accept Strings. Probably not worth it. > New defaults for S3A multi-part configuration > - > > Key: HADOOP-13868 > URL: https://issues.apache.org/jira/browse/HADOOP-13868 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 2.7.0, 3.0.0-alpha1 >Reporter: Sean Mackrory >Assignee: Sean Mackrory > Attachments: HADOOP-13868.001.patch, HADOOP-13868.002.patch, > optimizing-multipart-s3a.sh > > > I've been looking at a big performance regression when writing to S3 from > Spark that appears to have been introduced with HADOOP-12891. > In the Amazon SDK, the default threshold for multi-part copies is 320x the > threshold for multi-part uploads (and the block size is 20x bigger), so I > don't think it's necessarily wise for us to have them be the same. > I did some quick tests and it seems to me the sweet spot when multi-part > copies start being faster is around 512MB. It wasn't as significant, but > using 104857600 (Amazon's default) for the blocksize was also slightly better. > I propose we do the following, although they're independent decisions: > (1) Split the configuration. Ideally, I'd like to have > fs.s3a.multipart.copy.threshold and fs.s3a.multipart.upload.threshold (and > corresponding properties for the block size). But then there's the question > of what to do with the existing fs.s3a.multipart.* properties. Deprecation? > Leave it as a short-hand for configuring both (that's overridden by the more > specific properties?). > (2) Consider increasing the default values. 
In my tests, 256 MB seemed to be > where multipart uploads came into their own, and 512 MB was where multipart > copies started outperforming the alternative. Would be interested to hear > what other people have seen. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
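The "M suffix" mentioned above refers to human-readable size values like {{100M}} in configuration files. Hadoop resolves these via {{Configuration.getLongBytes()}}; the following is a simplified, standalone stand-in for that kind of parsing, not the actual Hadoop implementation.

```java
// Parse sizes such as "100M" or "512M" into bytes, in the style of
// Hadoop's Configuration.getLongBytes() (binary prefixes: K/M/G/T).
public class SizeSuffix {
    static long parseBytes(String s) {
        s = s.trim();
        char last = Character.toUpperCase(s.charAt(s.length() - 1));
        long mult;
        switch (last) {
            case 'K': mult = 1L << 10; break;
            case 'M': mult = 1L << 20; break;
            case 'G': mult = 1L << 30; break;
            case 'T': mult = 1L << 40; break;
            default:  return Long.parseLong(s);  // plain byte count
        }
        return Long.parseLong(s.substring(0, s.length() - 1)) * mult;
    }

    public static void main(String[] args) {
        // 100M is Amazon's default part size mentioned in the issue:
        System.out.println(parseBytes("100M"));  // 104857600
        System.out.println(parseBytes("512M"));  // 536870912
    }
}
```

This is why the patch can write {{512M}} in {{core-default.xml}} instead of the raw {{536870912}}, while {{Constants.java}} keeps plain {{long}} values.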
[jira] [Commented] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set
[ https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735486#comment-15735486 ] Hadoop QA commented on HADOOP-13852: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 23s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 33s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 55m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HADOOP-13852 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12842557/HADOOP-13582-003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux 45af14786fe9 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 80b8023 | | Default Java | 1.8.0_111 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/11235/testReport/ | | modules | C: hadoop-project hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/11235/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > hadoop build to allow hadoop version property to be explicitly set > -- > > Key: HADOOP-13852 > URL: https://issues.apache.org/jira/browse/HADOOP-13852 > Project: Hadoop Common >
[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A
[ https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735396#comment-15735396 ] Steve Loughran commented on HADOOP-13345: - yeah, but the patch has been reverted from trunk as it broke a small bit of YARN. Once the final patch is in I'll merge up again. > S3Guard: Improved Consistency for S3A > - > > Key: HADOOP-13345 > URL: https://issues.apache.org/jira/browse/HADOOP-13345 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Attachments: HADOOP-13345.prototype1.patch, > S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, > S3GuardImprovedConsistencyforS3AV2.pdf, s3c.001.patch > > > This issue proposes S3Guard, a new feature of S3A, to provide an option for a > stronger consistency model than what is currently offered. The solution > coordinates with a strongly consistent external store to resolve > inconsistencies caused by the S3 eventual consistency model. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13883) Add description of -fs option in generic command usage
[ https://issues.apache.org/jira/browse/HADOOP-13883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-13883: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) Committed to trunk, branch-2 and branch-2.8. [~linyiqun], thanks for the contribution. Hope it is fine that this went to branch-2 and branch-2.8 as well. > Add description of -fs option in generic command usage > -- > > Key: HADOOP-13883 > URL: https://issues.apache.org/jira/browse/HADOOP-13883 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-13883.001.patch > > > Currently the description of option '-fs' is missing in generic command > usage in documentation {{CommandManual.md}}. And the users won't know to use > this option, while this option already makes sense to {{hdfs dfsadmin}}, > {{hdfs fsck}}, etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set
[ https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13852: Status: Patch Available (was: Reopened) > hadoop build to allow hadoop version property to be explicitly set > -- > > Key: HADOOP-13852 > URL: https://issues.apache.org/jira/browse/HADOOP-13852 > Project: Hadoop Common > Issue Type: New Feature > Components: build >Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13582-003.patch, HADOOP-13852-001.patch, > HADOOP-13852-002.patch > > > Hive (and transitively) Spark, won't start on Hadoop 3.x as the shim layer > rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to > have the Hadoop version (currently set to pom.version) to be overridden > manually. > This will not affect version names of artifacts, merely the declared Hadoop > version visible in {{VersionInfo.getVersion()}} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set
[ https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13852: Attachment: HADOOP-13582-003.patch > hadoop build to allow hadoop version property to be explicitly set > -- > > Key: HADOOP-13852 > URL: https://issues.apache.org/jira/browse/HADOOP-13852 > Project: Hadoop Common > Issue Type: New Feature > Components: build >Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13582-003.patch, HADOOP-13852-001.patch, > HADOOP-13852-002.patch > > > Hive (and transitively) Spark, won't start on Hadoop 3.x as the shim layer > rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to > have the Hadoop version (currently set to pom.version) to be overridden > manually. > This will not affect version names of artifacts, merely the declared Hadoop > version visible in {{VersionInfo.getVersion()}} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set
[ https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735336#comment-15735336 ] Steve Loughran commented on HADOOP-13852: - Patch 003 # moves variable set to {{hadoop-project/pom.xml}} # verifies that {{yarn-common/target/classes/version-info.properties}} has the version data # verifies that TestRMWebServicesNodes passes > hadoop build to allow hadoop version property to be explicitly set > -- > > Key: HADOOP-13852 > URL: https://issues.apache.org/jira/browse/HADOOP-13852 > Project: Hadoop Common > Issue Type: New Feature > Components: build >Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13852-001.patch, HADOOP-13852-002.patch > > > Hive (and transitively) Spark, won't start on Hadoop 3.x as the shim layer > rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to > have the Hadoop version (currently set to pom.version) to be overridden > manually. > This will not affect version names of artifacts, merely the declared Hadoop > version visible in {{VersionInfo.getVersion()}} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13824) FsShell can suppress the real error if no error message is present
[ https://issues.apache.org/jira/browse/HADOOP-13824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735330#comment-15735330 ] Steve Loughran commented on HADOOP-13824: - LGTM +1 Filed a JIRA on the LTU test failure > FsShell can suppress the real error if no error message is present > -- > > Key: HADOOP-13824 > URL: https://issues.apache.org/jira/browse/HADOOP-13824 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.7.1, 2.7.3 >Reporter: Rob Vesse >Assignee: John Zhuge > Labels: supportability > Attachments: HADOOP-13824.001.patch, HADOOP-13824.002.patch, > HADOOP-13824.003.patch > > > The {{FsShell}} error handling assumes in {{displayError()}} that the > {{message}} argument is not {{null}}. However in the case where it is this > leads to a NPE which results in suppressing the actual error information > since a higher level of error handling kicks in and just dumps the stack > trace of the NPE instead. > e.g. > {noformat} > Exception in thread "main" java.lang.NullPointerException > at org.apache.hadoop.fs.FsShell.displayError(FsShell.java:304) > at org.apache.hadoop.fs.FsShell.run(FsShell.java:289) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84) > at org.apache.hadoop.fs.FsShell.main(FsShell.java:340) > {noformat} > This is deeply unhelpful because depending on what the underlying error was > there may be no stack dumped/logged for it (as HADOOP-7114 provides) since > {{FsShell}} doesn't explicitly dump traces for {{IllegalArgumentException}} > which appears to be the underlying cause of my issue. Line 289 is where > {{displayError()}} is called for {{IllegalArgumentException}} handling and > that catch clause does not log the error. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
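The NPE above comes from the error handler assuming {{getLocalizedMessage()}} is non-null. One null-safe shape for the fix (this is an illustrative sketch, not the code in the attached patches) is to fall back to the exception's class name when no message is present:

```java
// Sketch of a null-safe error description for a shell-style error
// handler: never let a null message turn into an NPE that masks the
// real failure. Method name "describe" is illustrative.
public class NullSafeError {
    static String describe(Exception e) {
        String msg = e.getLocalizedMessage();
        if (msg == null) {
            // Fall back to the exception type so the original error is
            // reported instead of an NPE inside displayError() itself.
            return e.getClass().getName();
        }
        // Shells typically print only the first line of long messages.
        int nl = msg.indexOf('\n');
        return nl >= 0 ? msg.substring(0, nl) : msg;
    }

    public static void main(String[] args) {
        System.out.println(describe(new IllegalArgumentException()));
        System.out.println(describe(new IllegalArgumentException("bad arg\ndetails")));
    }
}
```

With a guard like this, an {{IllegalArgumentException}} with no message still surfaces as itself rather than as a confusing NPE stack trace.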
[jira] [Commented] (HADOOP-13881) Remove deprecated APIs added in HADOOP-6709
[ https://issues.apache.org/jira/browse/HADOOP-13881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735292#comment-15735292 ] Hadoop QA commented on HADOOP-13881: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
45s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 7m 1s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 7m 1s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 36s{color} | {color:green} root: The patch generated 0 new + 216 unchanged - 2 fixed = 216 total (was 218) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 25s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 9s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 38m 49s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}121m 36s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer | | | hadoop.yarn.server.resourcemanager.TestRMRestart | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HADOOP-13881 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12842535/HADOOP-13881.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 24401c9deb59 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d1d4aba | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | compile | https://builds.apache.org/job/PreCommit-HADOOP-Build/11233/artifact/patchprocess/patch-compile-root.txt | | javac |
[jira] [Created] (HADOOP-13884) s3a create(overwrite=true) to only look for dir/ and list entries, not file
Steve Loughran created HADOOP-13884: --- Summary: s3a create(overwrite=true) to only look for dir/ and list entries, not file Key: HADOOP-13884 URL: https://issues.apache.org/jira/browse/HADOOP-13884 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Affects Versions: 2.9.0 Reporter: Steve Loughran Priority: Minor before doing a create(), s3a does a getFileStatus() to make sure there isn't a directory there, and, if overwrite=false, that there isn't a file. Because S3 caches negative HEAD/GET requests, if there isn't a file, then even after the PUT, a later GET/HEAD may return 404; we are generating create inconsistency where none need exist. When overwrite=true we don't care whether the file exists or not, only that the path isn't a directory. So we can just do the HEAD on {{path + "/"}} and the LIST calls, skipping the {{HEAD path}}. This will save an HTTP round trip of a few hundred millis, and ensure that there's no 404 cached in the S3 front end for later callers. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
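The probe reduction proposed here can be sketched against a toy key-value "store" standing in for S3 (directory markers as keys ending in "/"). This is an illustration of the check ordering only; the real S3A code paths and exception types differ.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch: with overwrite=true, only the dir-marker HEAD and the LIST
// are needed; the HEAD on the file path itself (which can poison the
// S3 negative cache with a 404) is skipped.
public class CreateProbe {
    static boolean pathExists(Set<String> store, String key) {
        return store.contains(key);
    }

    static boolean hasChildren(Set<String> store, String key) {
        String prefix = key + "/";
        for (String k : store) {
            if (k.startsWith(prefix)) {
                return true;
            }
        }
        return false;
    }

    /** Throws if dest is a directory; with overwrite=false, also if a
     *  file already exists (the only case needing a HEAD on the path). */
    static void checkCreate(Set<String> store, String key, boolean overwrite) {
        if (pathExists(store, key + "/") || hasChildren(store, key)) {
            throw new IllegalStateException(key + " is a directory");
        }
        if (!overwrite && pathExists(store, key)) {
            throw new IllegalStateException(key + " already exists");
        }
    }

    public static void main(String[] args) {
        Set<String> store = new HashSet<>(Arrays.asList("a/file", "dir/", "dir2/child"));
        checkCreate(store, "a/file", true);  // ok: no HEAD on the file path
        try {
            checkCreate(store, "dir", true);
        } catch (IllegalStateException e) {
            System.out.println("dir rejected");
        }
    }
}
```

The point is the short-circuit: when {{overwrite=true}} the second probe never runs, so no 404 for the file path ever enters S3's front-end cache.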
[jira] [Updated] (HADOOP-13883) Add description of -fs option in generic command usage
[ https://issues.apache.org/jira/browse/HADOOP-13883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-13883: -- Fix Version/s: 3.0.0-alpha2 > Add description of -fs option in generic command usage > -- > > Key: HADOOP-13883 > URL: https://issues.apache.org/jira/browse/HADOOP-13883 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13883.001.patch > > > Currently the description of option '-fs' is missing in generic command > usage in documentation {{CommandManual.md}}. And the users won't know to use > this option, while this option already makes sense to {{hdfs dfsadmin}}, > {{hdfs fsck}}, etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13883) Add description of -fs option in generic command usage
[ https://issues.apache.org/jira/browse/HADOOP-13883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-13883: -- Affects Version/s: (was: 3.0.0-alpha2) > Add description of -fs option in generic command usage > -- > > Key: HADOOP-13883 > URL: https://issues.apache.org/jira/browse/HADOOP-13883 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HADOOP-13883.001.patch > > > Currently the description of option '-fs' is missing in generic command > usage in documentation {{CommandManual.md}}. And the users won't know to use > this option, while this option already makes sense to {{hdfs dfsadmin}}, > {{hdfs fsck}}, etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13879) Remove deprecated FileSystem#getDefault* and getServerDefault methods that don't take a Path argument
[ https://issues.apache.org/jira/browse/HADOOP-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735218#comment-15735218 ] Steve Loughran commented on HADOOP-13879: - -1 # There's almost no cost in these methods, as they return the defaults. It's mainly in viewFS that you may have different defaults. # If you look at implementations {{getServerDefaults(Path)}} defaults to calling {{getServerDefaults()}} —which is the implementation provided by most of the filesystem implementations. # remove that base method and it potentially breaks every FS implementation, who will have to now implement {{getServerDefaults(Path)}}, and, if they did implement {{getServerDefaults()}}, either remove it, or at least remove any {{@Override}} marker. This is not just going to cause problems client-side, it will break those implementations. (moving to HDFS as it's an FS API issue, which makes it their responsibility) > Remove deprecated FileSystem#getDefault* and getServerDefault methods that > don't take a Path argument > -- > > Key: HADOOP-13879 > URL: https://issues.apache.org/jira/browse/HADOOP-13879 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-13879.01.patch, HADOOP-13879.02.patch, > HADOOP-13879.03.patch, HADOOP-13879.04.patch > > > FileSystem#getServerDefaults(), #getDefaultReplication, #getDefaultBlockSize > were deprecated by HADOOP-8422 and the fix version is 2.0.2-alpha. They can > be removed in Hadoop 3. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
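Steve's point #2 and #3 rest on a delegation pattern that is easy to demonstrate in miniature. The class and method names below are illustrative (String stands in for FsServerDefaults); the shape matches his description: the Path overload in the base class delegates to the deprecated no-arg method, which is what most filesystem implementations actually override.

```java
abstract class BaseFs {
    @Deprecated
    public String getServerDefaults() {
        return "base-defaults";
    }

    // The non-deprecated overload delegates to the old method, so
    // subclasses that only override the no-arg form keep working.
    public String getServerDefaults(String path) {
        return getServerDefaults();
    }
}

// A typical existing implementation: overrides only the no-arg method.
class LegacyFs extends BaseFs {
    @Override
    public String getServerDefaults() {
        return "legacy-defaults";
    }
}

public class Delegation {
    static String resolvedDefaults() {
        BaseFs fs = new LegacyFs();
        // Removing BaseFs.getServerDefaults() would turn the @Override
        // in LegacyFs into a compile error and silently change what the
        // Path overload returns for every such subclass.
        return fs.getServerDefaults("/any/path");
    }

    public static void main(String[] args) {
        System.out.println(resolvedDefaults());
    }
}
```

Deleting the base no-arg method thus breaks every subclass in LegacyFs's position, which is exactly the compatibility risk the -1 calls out.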
[jira] [Commented] (HADOOP-13883) Add description of -fs option in generic command usage
[ https://issues.apache.org/jira/browse/HADOOP-13883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735213#comment-15735213 ] Brahma Reddy Battula commented on HADOOP-13883: --- [~linyiqun], thanks for reporting this. LGTM, will commit. I feel this should go to branch-2.8 and branch-2 also, but you set the affects version as 3.0.0-alpha2; any specific reason? > Add description of -fs option in generic command usage > -- > > Key: HADOOP-13883 > URL: https://issues.apache.org/jira/browse/HADOOP-13883 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 3.0.0-alpha2 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HADOOP-13883.001.patch > > > Currently the description of option '-fs' is missing in generic command > usage in documentation {{CommandManual.md}}. And the users won't know to use > this option, while this option already makes sense to {{hdfs dfsadmin}}, > {{hdfs fsck}}, etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance on branch-2.8 awful
[ https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733129#comment-15733129 ] Steve Loughran edited comment on HADOOP-13871 at 12/9/16 11:51 AM: --- Also now seen on trunk. netstat shows the link is up, {code} tcp4 0 0 192.168.1.12.55256 s3-us-west-2-r-w.https ESTABLISHED {code} and nettop shows inaction, though the rx_ooo counter seemed to be incrementing at ~2 KB/s for a bit, before hanging completely {code} state packets_in bytes_in packets_out bytes_out rx_dupe rx_ooo re-tx rtt_avg rtt_var rcvsize tx_win P C R W java.16828 24502 13 MiB 8 3507 B 37 KiB 4654 KiB 0 B tcp4 192.168.1.12:55256<->s3-us-west-2-r-w.amazonaws.com:443 Established 24502 13 MiB 8 3507 B 37 KiB 4654 KiB 0 B 185.31 ms 15.03 ms 256 KiB 21 KiB - - - - {code} That's 4MB of OOO packets for 13 MB read, symptomatic of routing fun. Then, suddenly, that TCP connection got closed (socket timeout) and a new one opened that went through the full dataset in a second or two {code} state packets_in bytes_in packets_out bytes_out rx_dupe rx_ooo re-tx rtt_avg rtt_var rcvsize tx_win P C R W java.16828 41636 37 MiB 25 9210 B 37 KiB 4654 KiB 0 B tcp4 192.168.1.12:55256<->s3-us-west-2-r-w.amazonaws.com:443 FinWait2 24502 13 MiB 9 3560 B 37 KiB 4654 KiB 0 B 184.16 ms 12.44 ms 256 KiB 21 KiB - - - - {code} The really good news: curl is now suffering too. Which means it's not a Java problem. Either the laptop (which has been rebooted with SMC reset), or the rest of the network. 
{code} $ curl -O https://landsat-pds.s3.amazonaws.com/scene_list.gz % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 5 37.4M 5 2090k 0 0 11824 0 0:55:21 0:03:01 0:52:20 7039 $ nettop -p 17105 state packets_in bytes_in packets_out bytes_out rx_dupe rx_ooo re-tx rtt_avg rtt_var rcvsize tx_win P C R W curl.17105 3178 2323 KiB 4 482 B 10232 B 918 KiB 0 B tcp4 192.168.1.12:55731<->s3-us-west-2-w.amazonaws.com:443 Established 3178 2323 KiB 4 482 B 10232 B 918 KiB 0 B 173.56 ms 20.41 ms 256 KiB 16 KiB - - - - {code} And on another attempt {code} curl -O https://landsat-pds.s3.amazonaws.com/scene_list.gz % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 37.4M 100 37.4M 0 0 4410k 0 0:00:08 0:00:08 --:--:-- 6382k {code} Conclusions: # sometimes over a network, we can get awful S3 read performance # which goes away on a reconnect, including those detected by socket timeouts # and which can be seen on other processes, so it's not a JVM/SDK problem # which means that curl can be used as a probe independent of everything else; nettop gives more details. I'm going to try setting some more aggressive socket timeouts than 200 seconds. If that does address this, maybe we should consider having a smaller default. Also: time for that advanced troubleshooting document.
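For reference, the socket timeout being discussed is S3A's {{fs.s3a.connection.timeout}} (in milliseconds, default 200000, i.e. the 200 seconds mentioned above). A sketch of a more aggressive setting in {{core-site.xml}}; the values below are illustrative for experimentation, not recommended defaults:

```xml
<!-- Illustrative only: shorter socket timeouts so a stalled S3 connection
     is detected and re-opened sooner. fs.s3a.connection.timeout defaults
     to 200000 ms (the 200 seconds discussed in the comment above). -->
<property>
  <name>fs.s3a.connection.timeout</name>
  <value>30000</value>
</property>
<property>
  <name>fs.s3a.connection.establish.timeout</name>
  <value>5000</value>
</property>
```

A lower read timeout trades more frequent reconnects for faster recovery from the stalled-connection state seen in the nettop traces.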
[jira] [Commented] (HADOOP-13883) Add description of -fs option in generic command usage
[ https://issues.apache.org/jira/browse/HADOOP-13883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735111#comment-15735111 ] Hadoop QA commented on HADOOP-13883: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 10m 27s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HADOOP-13883 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12842537/HADOOP-13883.001.patch | | Optional Tests | asflicense mvnsite | | uname | Linux cdd424499080 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d1d4aba | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/11234/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated.
[jira] [Updated] (HADOOP-13883) Add description of -fs option in generic command usage
[ https://issues.apache.org/jira/browse/HADOOP-13883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HADOOP-13883: --- Status: Patch Available (was: Open) Patch attached.
[jira] [Updated] (HADOOP-13883) Add description of -fs option in generic command usage
[ https://issues.apache.org/jira/browse/HADOOP-13883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HADOOP-13883: --- Attachment: HADOOP-13883.001.patch
[jira] [Created] (HADOOP-13883) Add description of -fs option in generic command usage
Yiqun Lin created HADOOP-13883: -- Summary: Add description of -fs option in generic command usage Key: HADOOP-13883 URL: https://issues.apache.org/jira/browse/HADOOP-13883 Project: Hadoop Common Issue Type: Bug Components: documentation Affects Versions: 3.0.0-alpha2 Reporter: Yiqun Lin Assignee: Yiqun Lin Priority: Minor Currently the description of option '-fs' is missing in generic command usage in documentation {{CommandsManual.md}}. And the users won't know to use this option, while this option already makes sense to {{hdfs dfsadmin}}, {{hdfs fsck}}, etc.
[jira] [Commented] (HADOOP-13882) TestLambdaTestUtils.testAwaitAlwaysFalse failing
[ https://issues.apache.org/jira/browse/HADOOP-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735044#comment-15735044 ] Steve Loughran commented on HADOOP-13882: - Looks like it's a timing issue: the test run took too long and the timeout kicked in before enough retries had run. Maybe crank it back to count > 2 (to imply more than one iteration), and make the exception more meaningful {code} @Test public void testAwaitAlwaysFalse() throws Throwable { try { await(TIMEOUT, ALWAYS_FALSE, retry, TIMEOUT_FAILURE_HANDLER); fail("should not have got here"); } catch (TimeoutException e) { assertTrue(retry.getInvocationCount() > 4); } } {code} > TestLambdaTestUtils.testAwaitAlwaysFalse failing > > > Key: HADOOP-13882 > URL: https://issues.apache.org/jira/browse/HADOOP-13882 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 3.0.0-alpha2 >Reporter: Steve Loughran > > {{org.apache.hadoop.test.TestLambdaTestUtils.testAwaitAlwaysFalse}} failing > on a Jenkins test run. No obvious info in the test as to why.
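As a hedged, self-contained illustration (a simplified stand-in, not the real {{LambdaTestUtils}} API) of why the fixed {{> 4}} bound is timing-fragile while a "more than one iteration" bound is not, consider a minimal await loop that counts probe invocations:

```java
// Minimal sketch: a simplified await() that retries a probe until it passes
// or simulated time runs out, returning how many times the probe ran.
// On a slow host, fewer real-time retry intervals fit inside the timeout,
// so asserting a high fixed count (e.g. > 4) can fail spuriously; asserting
// "at least 2 invocations" still proves the retry loop actually looped.
public class AwaitRetrySketch {
    interface Check { boolean eval(); }

    static int await(Check check, int maxMillis, int intervalMillis) {
        int invocations = 0;
        for (int elapsed = 0; elapsed < maxMillis; elapsed += intervalMillis) {
            invocations++;
            if (check.eval()) {
                return invocations; // probe passed
            }
        }
        return invocations; // timed out
    }

    public static void main(String[] args) {
        // ALWAYS_FALSE probe: await must exhaust the timeout.
        int count = await(() -> false, 100, 10);
        // The looser bound survives a slow run where few intervals fit.
        if (count < 2) {
            throw new AssertionError("expected >= 2 invocations, got " + count);
        }
        System.out.println("invocations=" + count);
    }
}
```

In this deterministic sketch the loop always runs maxMillis/intervalMillis times; in the real test, wall-clock delays make the count variable, which is exactly the flakiness described.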
[jira] [Assigned] (HADOOP-13882) TestLambdaTestUtils.testAwaitAlwaysFalse failing
[ https://issues.apache.org/jira/browse/HADOOP-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-13882: --- Assignee: Steve Loughran
[jira] [Commented] (HADOOP-13882) TestLambdaTestUtils.testAwaitAlwaysFalse failing
[ https://issues.apache.org/jira/browse/HADOOP-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735039#comment-15735039 ] Steve Loughran commented on HADOOP-13882: - {code} java.lang.AssertionError: null at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.hadoop.test.TestLambdaTestUtils.testAwaitAlwaysFalse(TestLambdaTestUtils.java:143) {code}
[jira] [Created] (HADOOP-13882) TestLambdaTestUtils.testAwaitAlwaysFalse failing
Steve Loughran created HADOOP-13882: --- Summary: TestLambdaTestUtils.testAwaitAlwaysFalse failing Key: HADOOP-13882 URL: https://issues.apache.org/jira/browse/HADOOP-13882 Project: Hadoop Common Issue Type: Bug Components: test Affects Versions: 3.0.0-alpha2 Reporter: Steve Loughran {{org.apache.hadoop.test.TestLambdaTestUtils.testAwaitAlwaysFalse}} failing on a Jenkins test run. No obvious info in the test as to why.
[jira] [Updated] (HADOOP-13881) Remove deprecated APIs added in HADOOP-6709
[ https://issues.apache.org/jira/browse/HADOOP-13881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13881: --- Hadoop Flags: Incompatible change Release Note: Removed deprecated FileSystem#getName, getNamed, getReplication(Path), delete(Path), getLength(Path), getBlockSize(Path). Use the alternatives documented in the javadoc of Hadoop 2.x. > Remove deprecated APIs added in HADOOP-6709 > --- > > Key: HADOOP-13881 > URL: https://issues.apache.org/jira/browse/HADOOP-13881 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-13881.01.patch > > > FileSystem#getName, getNamed, getReplication(Path), delete(Path), > getLength(Path), getBlockSize(Path) were re-instated about 6 years ago. They > can be removed in Hadoop 3.
[jira] [Commented] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set
[ https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735024#comment-15735024 ] Steve Loughran commented on HADOOP-13852: - oh, I get it. Sorry. Let me (a) fix it so that hadoop-project gets it, and (b) submit a patch to the YARN JIRA too > hadoop build to allow hadoop version property to be explicitly set > -- > > Key: HADOOP-13852 > URL: https://issues.apache.org/jira/browse/HADOOP-13852 > Project: Hadoop Common > Issue Type: New Feature > Components: build >Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13852-001.patch, HADOOP-13852-002.patch > > > Hive (and transitively) Spark, won't start on Hadoop 3.x as the shim layer > rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to > have the Hadoop version (currently set to pom.version) to be overridden > manually. > This will not affect version names of artifacts, merely the declared Hadoop > version visible in {{VersionInfo.getVersion()}}
[jira] [Commented] (HADOOP-13868) New defaults for S3A multi-part configuration
[ https://issues.apache.org/jira/browse/HADOOP-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735019#comment-15735019 ] Steve Loughran commented on HADOOP-13868: - 128MB seems a reasonable increase. But could the patched values be of the form 128M, rather than the multiplied-out number? That way it's easier for people reading it to see what the actual number means. > New defaults for S3A multi-part configuration > - > > Key: HADOOP-13868 > URL: https://issues.apache.org/jira/browse/HADOOP-13868 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 2.7.0, 3.0.0-alpha1 >Reporter: Sean Mackrory >Assignee: Sean Mackrory > Attachments: HADOOP-13868.001.patch, optimizing-multipart-s3a.sh > > > I've been looking at a big performance regression when writing to S3 from > Spark that appears to have been introduced with HADOOP-12891. > In the Amazon SDK, the default threshold for multi-part copies is 320x the > threshold for multi-part uploads (and the block size is 20x bigger), so I > don't think it's necessarily wise for us to have them be the same. > I did some quick tests and it seems to me the sweet spot when multi-part > copies start being faster is around 512MB. It wasn't as significant, but > using 104857600 (Amazon's default) for the blocksize was also slightly better. > I propose we do the following, although they're independent decisions: > (1) Split the configuration. Ideally, I'd like to have > fs.s3a.multipart.copy.threshold and fs.s3a.multipart.upload.threshold (and > corresponding properties for the block size). But then there's the question > of what to do with the existing fs.s3a.multipart.* properties. Deprecation? > Leave it as a short-hand for configuring both (that's overridden by the more > specific properties?). > (2) Consider increasing the default values. 
In my tests, 256 MB seemed to be > where multipart uploads came into their own, and 512 MB was where multipart > copies started outperforming the alternative. Would be interested to hear > what other people have seen.
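To make the proposed split in (1) concrete, here is a hypothetical {{core-site.xml}} fragment. The property names {{fs.s3a.multipart.upload.threshold}} and {{fs.s3a.multipart.copy.threshold}} are the names proposed in this issue, not existing configuration keys, and the values echo the sweet spots measured above ({{128M}} written in the suffixed form suggested in the comment):

```xml
<!-- Hypothetical: the proposed split of the single fs.s3a.multipart.*
     settings into upload- and copy-specific thresholds. Only the combined
     fs.s3a.multipart.size / fs.s3a.multipart.threshold exist today. -->
<property>
  <name>fs.s3a.multipart.upload.threshold</name>
  <value>128M</value>
</property>
<property>
  <name>fs.s3a.multipart.copy.threshold</name>
  <value>512M</value>
</property>
```

Under this proposal the legacy combined properties would either be deprecated or kept as a shorthand that the specific properties override, as discussed in the issue description.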
[jira] [Updated] (HADOOP-13881) Remove deprecated APIs added in HADOOP-6709
[ https://issues.apache.org/jira/browse/HADOOP-13881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13881: --- Status: Patch Available (was: Open)
[jira] [Updated] (HADOOP-13881) Remove deprecated APIs added in HADOOP-6709
[ https://issues.apache.org/jira/browse/HADOOP-13881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13881: --- Attachment: HADOOP-13881.01.patch
[jira] [Commented] (HADOOP-13879) Remove deprecated FileSystem#getDefault* and getServerDefault methods that don't take a Path argument
[ https://issues.apache.org/jira/browse/HADOOP-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735005#comment-15735005 ] Hadoop QA commented on HADOOP-13879: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 9s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 37s{color} | {color:green} root generated 0 new + 711 unchanged - 5 fixed = 711 total (was 716) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 26s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 18s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 24s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}168m 28s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestErasureCodeBenchmarkThroughput | | Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HADOOP-13879 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12842502/HADOOP-13879.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 9224d4099962 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7d8e440 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/11230/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results |
[jira] [Updated] (HADOOP-13879) Remove deprecated FileSystem#getDefault* and getServerDefault methods that don't take a Path argument
[ https://issues.apache.org/jira/browse/HADOOP-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13879: --- Hadoop Flags: Incompatible change Release Note: FileSystem#getServerDefaults, #getDefaultReplication, #getDefaultBlockSize methods that don't take a Path argument were removed. Use the methods that take a Path instead. Updated. Thanks [~brahmareddy]. > Remove deprecated FileSystem#getDefault* and getServerDefault methods that > don't take a Path argument > -- > > Key: HADOOP-13879 > URL: https://issues.apache.org/jira/browse/HADOOP-13879 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-13879.01.patch, HADOOP-13879.02.patch, > HADOOP-13879.03.patch, HADOOP-13879.04.patch > > > FileSystem#getServerDefaults(), #getDefaultReplication, #getDefaultBlockSize > were deprecated by HADOOP-8422 and the fix version is 2.0.2-alpha. They can > be removed in Hadoop 3.
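As a hedged migration sketch for the release note above (a self-contained stub, not Hadoop's actual {{FileSystem}} class), the change for callers is simply to pass the path they are operating on, since server defaults can vary per filesystem and path:

```java
// Self-contained sketch of the call-site migration. FsStub is a hypothetical
// stand-in for FileSystem: the no-argument accessor was removed; the
// path-taking overload survives because defaults may depend on the path.
public class ServerDefaultsSketch {
    static class FsStub {
        // Removed in Hadoop 3 (deprecated since 2.0.2-alpha):
        // long getDefaultBlockSize() { ... }

        // Surviving overload: takes the path whose defaults are wanted.
        long getDefaultBlockSize(String path) {
            return 128L * 1024 * 1024; // illustrative: 128 MB
        }
    }

    public static void main(String[] args) {
        FsStub fs = new FsStub();
        // Migrated call site: pass the path being written.
        long blockSize = fs.getDefaultBlockSize("/user/data/file");
        System.out.println("blockSize=" + blockSize);
    }
}
```

The same pattern applies to {{getServerDefaults}} and {{getDefaultReplication}}: replace the no-argument call with the overload that takes the {{Path}} in question.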
[jira] [Commented] (HADOOP-13879) Remove deprecated FileSystem#getDefault* and getServerDefault methods that don't take a Path argument
[ https://issues.apache.org/jira/browse/HADOOP-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734978#comment-15734978 ] Brahma Reddy Battula commented on HADOOP-13879: --- [~ajisakaa], thanks for updating the patch. At first glance, shouldn't we mark this as incompatible, as it removes public methods?
[jira] [Updated] (HADOOP-13881) Remove deprecated APIs added in HADOOP-6709
[ https://issues.apache.org/jira/browse/HADOOP-13881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13881: --- Description: FileSystem#getName, getNamed, getReplication(Path), delete(Path), getLength(Path), getBlockSize(Path) were re-instated about 6 years ago. They can be removed in Hadoop 3. (was: FileSystem#getName, getNamed, getReplication(Path), delete(Path), getFileStatus(), getBlockSize(Path) were re-instated about 6 years ago. They can be removed in Hadoop 3.)
[jira] [Assigned] (HADOOP-13881) Remove deprecated APIs added in HADOOP-6709
[ https://issues.apache.org/jira/browse/HADOOP-13881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka reassigned HADOOP-13881: -- Assignee: Akira Ajisaka
[jira] [Created] (HADOOP-13881) Remove deprecated APIs added in HADOOP-6709
Akira Ajisaka created HADOOP-13881: -- Summary: Remove deprecated APIs added in HADOOP-6709 Key: HADOOP-13881 URL: https://issues.apache.org/jira/browse/HADOOP-13881 Project: Hadoop Common Issue Type: Improvement Reporter: Akira Ajisaka FileSystem#getName, getNamed, getReplication(Path), delete(Path), getFileStatus(), getBlockSize(Path) were re-instated about 6 years ago. They can be removed in Hadoop 3.
[jira] [Commented] (HADOOP-13879) Remove deprecated FileSystem#getDefault* and getServerDefault methods that don't take a Path argument
[ https://issues.apache.org/jira/browse/HADOOP-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734953#comment-15734953 ] Hadoop QA commented on HADOOP-13879: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 10s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 59s{color} | {color:green} root generated 0 new + 694 unchanged - 22 fixed = 694 total (was 716) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 44s{color} | {color:orange} root: The patch generated 1 new + 560 unchanged - 1 fixed = 561 total (was 561) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 16s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 3s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 7s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 13s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 51s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}161m 49s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.io.TestArrayPrimitiveWritable | | | hadoop.io.TestObjectWritableProtos | | | hadoop.io.TestArrayWritable | | | hadoop.io.TestEnumSetWritable | | | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA | | Timed out junit tests | org.apache.hadoop.metrics2.lib.TestMutableMetrics | | | org.apache.hadoop.io.TestSequenceFile | | | org.apache.hadoop.io.nativeio.TestNativeIO | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HADOOP-13879 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12842498/HADOOP-13879.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 6285551df913 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Updated] (HADOOP-13879) Remove deprecated FileSystem#getDefault* and getServerDefault methods that don't take a Path argument
[ https://issues.apache.org/jira/browse/HADOOP-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13879: --- Attachment: HADOOP-13879.04.patch 04 patch: * Use {{p}} instead of null in {{FileSystem#getServerDefaults(Path p)}} * Use {{new Path("/")}} instead of null in {{DelegateToFileSystem#getServerDefaults()}}. > Remove deprecated FileSystem#getDefault* and getServerDefault methods that > don't take a Path argument > -- > > Key: HADOOP-13879 > URL: https://issues.apache.org/jira/browse/HADOOP-13879 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-13879.01.patch, HADOOP-13879.02.patch, > HADOOP-13879.03.patch, HADOOP-13879.04.patch > > > FileSystem#getServerDefaults(), #getDefaultReplication, #getDefaultBlockSize > were deprecated by HADOOP-8422 and the fix version is 2.0.2-alpha. They can > be removed in Hadoop 3. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
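The delegation described in the 04 patch can be sketched with a minimal, self-contained example. Note these classes (`SketchFileSystem`, `Hadoop13879Sketch`, and the stub `Path`/`FsServerDefaults`) are illustrative stand-ins, not the real `org.apache.hadoop` types: the deprecated no-argument {{getServerDefaults()}} forwards to the Path-taking overload with a concrete path instead of null, which is also what callers pass once the no-argument variants are removed.

```java
// Illustrative stand-ins for Hadoop's Path / FsServerDefaults / FileSystem;
// only the delegation pattern from the 04 patch is sketched here.
class Path {
    private final String path;
    Path(String path) { this.path = path; }
    @Override public String toString() { return path; }
}

class FsServerDefaults {
    final long blockSize;
    FsServerDefaults(long blockSize) { this.blockSize = blockSize; }
}

abstract class SketchFileSystem {
    /** @deprecated use {@link #getServerDefaults(Path)} instead. */
    @Deprecated
    public FsServerDefaults getServerDefaults() {
        // Per the patch description: delegate with an explicit root Path
        // rather than passing null to the Path-taking overload.
        return getServerDefaults(new Path("/"));
    }

    /** The Path-aware variant that remains after the cleanup. */
    public abstract FsServerDefaults getServerDefaults(Path p);
}

public class Hadoop13879Sketch extends SketchFileSystem {
    @Override
    public FsServerDefaults getServerDefaults(Path p) {
        return new FsServerDefaults(128L * 1024 * 1024); // hypothetical 128 MB default
    }

    public static void main(String[] args) {
        SketchFileSystem fs = new Hadoop13879Sketch();
        // Callers migrate to the Path-taking method:
        System.out.println(fs.getServerDefaults(new Path("/user/data")).blockSize);
    }
}
```

In the actual patch the same idea applies to {{FileSystem#getServerDefaults(Path p)}} (use {{p}}, not null) and {{DelegateToFileSystem#getServerDefaults()}} (pass {{new Path("/")}}).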
[jira] [Commented] (HADOOP-13879) Remove deprecated FileSystem#getDefault* and getServerDefault methods that don't take a Path argument
[ https://issues.apache.org/jira/browse/HADOOP-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734910#comment-15734910 ] Akira Ajisaka commented on HADOOP-13879: Thanks [~brahmareddy] for linking the jira. Updated the title and the description. > Remove deprecated FileSystem#getDefault* and getServerDefault methods that > don't take a Path argument > -- > > Key: HADOOP-13879 > URL: https://issues.apache.org/jira/browse/HADOOP-13879 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-13879.01.patch, HADOOP-13879.02.patch, > HADOOP-13879.03.patch > > > FileSystem#getServerDefaults(), #getDefaultReplication, #getDefaultBlockSize > were deprecated by HADOOP-8422 and the fix version is 2.0.2-alpha. They can > be removed in Hadoop 3.
[jira] [Updated] (HADOOP-13879) Remove deprecated FileSystem#getDefault* and getServerDefault methods that don't take a Path argument
[ https://issues.apache.org/jira/browse/HADOOP-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13879: --- Description: FileSystem#getServerDefaults(), #getDefaultReplication, #getDefaultBlockSize were deprecated by HADOOP-8422 and the fix version is 2.0.2-alpha. They can be removed in Hadoop 3. (was: FileSystem.getServerDefaults() was deprecated by HADOOP-8422 and the fix version is 2.0.2-alpha. The API can be removed in Hadoop 3.) Summary: Remove deprecated FileSystem#getDefault* and getServerDefault methods that don't take a Path argument (was: Remove deprecated FileSystem.getServerDefaults()) > Remove deprecated FileSystem#getDefault* and getServerDefault methods that > don't take a Path argument > -- > > Key: HADOOP-13879 > URL: https://issues.apache.org/jira/browse/HADOOP-13879 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-13879.01.patch, HADOOP-13879.02.patch, > HADOOP-13879.03.patch > > > FileSystem#getServerDefaults(), #getDefaultReplication, #getDefaultBlockSize > were deprecated by HADOOP-8422 and the fix version is 2.0.2-alpha. They can > be removed in Hadoop 3.
[jira] [Updated] (HADOOP-13879) Remove deprecated FileSystem.getServerDefaults()
[ https://issues.apache.org/jira/browse/HADOOP-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13879: --- Attachment: HADOOP-13879.03.patch > Remove deprecated FileSystem.getServerDefaults() > > > Key: HADOOP-13879 > URL: https://issues.apache.org/jira/browse/HADOOP-13879 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-13879.01.patch, HADOOP-13879.02.patch, > HADOOP-13879.03.patch > > > FileSystem.getServerDefaults() was deprecated by HADOOP-8422 and the fix > version is 2.0.2-alpha. The API can be removed in Hadoop 3.
[jira] [Commented] (HADOOP-13869) using HADOOP_USER_CLASSPATH_FIRST inconsistently
[ https://issues.apache.org/jira/browse/HADOOP-13869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734898#comment-15734898 ] Fei Hui commented on HADOOP-13869: -- Hi [~aw], has this patch been considered? What do you think? > using HADOOP_USER_CLASSPATH_FIRST inconsistently > > > Key: HADOOP-13869 > URL: https://issues.apache.org/jira/browse/HADOOP-13869 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 3.0.0-alpha2 >Reporter: Fei Hui >Assignee: Fei Hui > Attachments: HADOOP-13869.001.patch > > > I find HADOOP_USER_CLASSPATH_FIRST is used inconsistently. In some places it is > set to true, in others to yes. > I know it doesn't matter, because the classpath is affected whenever > HADOOP_USER_CLASSPATH_FIRST is non-empty, > but maybe it's better to set HADOOP_USER_CLASSPATH_FIRST uniformly.
[jira] [Created] (HADOOP-13880) Fix dead links in relevant APIs of Job setting
Yiqun Lin created HADOOP-13880: -- Summary: Fix dead links in relevant APIs of Job setting Key: HADOOP-13880 URL: https://issues.apache.org/jira/browse/HADOOP-13880 Project: Hadoop Common Issue Type: Bug Components: documentation Affects Versions: 3.0.0-alpha2 Reporter: Yiqun Lin Assignee: Yiqun Lin Priority: Minor There are some dead links in the Job-related classes.
[jira] [Commented] (HADOOP-13879) Remove deprecated FileSystem.getServerDefaults()
[ https://issues.apache.org/jira/browse/HADOOP-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734818#comment-15734818 ] Brahma Reddy Battula commented on HADOOP-13879: --- bq. Rethinking this, it's better to do this in a single jira since HADOOP-8422 deprecated getDefaultBlockSize() and getDefaultReplication() as well. I think the same; linking the deprecation jira. > Remove deprecated FileSystem.getServerDefaults() > > > Key: HADOOP-13879 > URL: https://issues.apache.org/jira/browse/HADOOP-13879 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-13879.01.patch, HADOOP-13879.02.patch > > > FileSystem.getServerDefaults() was deprecated by HADOOP-8422 and the fix > version is 2.0.2-alpha. The API can be removed in Hadoop 3.
[jira] [Commented] (HADOOP-13879) Remove deprecated FileSystem.getServerDefaults()
[ https://issues.apache.org/jira/browse/HADOOP-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734800#comment-15734800 ] Akira Ajisaka commented on HADOOP-13879: bq. are you planning for getDefaultBlockSize() and getDefaultReplication() in a separate jira? Rethinking this, it's better to do this in a single jira since HADOOP-8422 deprecated getDefaultBlockSize() and getDefaultReplication() as well. > Remove deprecated FileSystem.getServerDefaults() > > > Key: HADOOP-13879 > URL: https://issues.apache.org/jira/browse/HADOOP-13879 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-13879.01.patch, HADOOP-13879.02.patch > > > FileSystem.getServerDefaults() was deprecated by HADOOP-8422 and the fix > version is 2.0.2-alpha. The API can be removed in Hadoop 3.
[jira] [Commented] (HADOOP-13879) Remove deprecated FileSystem.getServerDefaults()
[ https://issues.apache.org/jira/browse/HADOOP-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734672#comment-15734672 ] Akira Ajisaka commented on HADOOP-13879: bq. are you planning for getDefaultBlockSize() and getDefaultReplication() in a separate jira? Yes. bq. Were the UTF8-related changes mistakenly updated as part of this jira? Yes, it's my mistake. Uploaded v2 patch. > Remove deprecated FileSystem.getServerDefaults() > > > Key: HADOOP-13879 > URL: https://issues.apache.org/jira/browse/HADOOP-13879 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-13879.01.patch, HADOOP-13879.02.patch > > > FileSystem.getServerDefaults() was deprecated by HADOOP-8422 and the fix > version is 2.0.2-alpha. The API can be removed in Hadoop 3.
[jira] [Updated] (HADOOP-13879) Remove deprecated FileSystem.getServerDefaults()
[ https://issues.apache.org/jira/browse/HADOOP-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13879: --- Attachment: HADOOP-13879.02.patch > Remove deprecated FileSystem.getServerDefaults() > > > Key: HADOOP-13879 > URL: https://issues.apache.org/jira/browse/HADOOP-13879 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-13879.01.patch, HADOOP-13879.02.patch > > > FileSystem.getServerDefaults() was deprecated by HADOOP-8422 and the fix > version is 2.0.2-alpha. The API can be removed in Hadoop 3.
[jira] [Comment Edited] (HADOOP-13879) Remove deprecated FileSystem.getServerDefaults()
[ https://issues.apache.org/jira/browse/HADOOP-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734662#comment-15734662 ] Brahma Reddy Battula edited comment on HADOOP-13879 at 12/9/16 8:12 AM: [~ajisakaa] thanks for reporting. Are you planning for {{getDefaultBlockSize()}} and {{getDefaultReplication()}} in a separate jira? And were the UTF8-related changes mistakenly updated as part of this jira? was (Author: brahmareddy): [~ajisakaa] thanks for reporting. > Remove deprecated FileSystem.getServerDefaults() > > > Key: HADOOP-13879 > URL: https://issues.apache.org/jira/browse/HADOOP-13879 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-13879.01.patch > > > FileSystem.getServerDefaults() was deprecated by HADOOP-8422 and the fix > version is 2.0.2-alpha. The API can be removed in Hadoop 3.