[jira] [Updated] (HADOOP-17905) Modify Text.ensureCapacity() to efficiently max out the backing array size
[ https://issues.apache.org/jira/browse/HADOOP-17905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-17905: Fix Version/s: (was: 3.3.2) 3.4.0 > Modify Text.ensureCapacity() to efficiently max out the backing array size > -- > > Key: HADOOP-17905 > URL: https://issues.apache.org/jira/browse/HADOOP-17905 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 3h 50m > Remaining Estimate: 0h > > This is a continuation of HADOOP-17901. > Right now we use a factor of 1.5x to increase the byte array if it's full. > However, if the size reaches a certain point, the increment is only (current > size + length). This can cause performance issues if the textual data which > we intend to store is beyond this point. > Instead, let's max out the array to the maximum. Based on different sources, > a safe choice seems to be Integer.MAX_VALUE - 8 (see ArrayList, > AbstractCollection, HashTable, etc). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-17905) Modify Text.ensureCapacity() to efficiently max out the backing array size
[ https://issues.apache.org/jira/browse/HADOOP-17905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham resolved HADOOP-17905. - Fix Version/s: 3.3.2 Resolution: Fixed
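The growth policy proposed in HADOOP-17905 can be sketched as a standalone method. This is an illustrative sketch with hypothetical names, not the actual Text.ensureCapacity() code: grow by 1.5x as before, but once growth overflows or passes the cap, jump straight to Integer.MAX_VALUE - 8 instead of creeping up by (current size + length) increments.

```java
public class TextCapacity {
    // Largest array size commonly used as a safe cap in the JDK
    // (see ArrayList, AbstractCollection, Hashtable).
    static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

    // Hypothetical sketch of the proposed policy; names are illustrative,
    // not taken from Text.java.
    static int newCapacity(int current, int required) {
        int grown = current + (current >> 1); // 1.5x growth factor
        if (grown < required) {
            grown = required; // growth was insufficient (or overflowed negative)
        }
        if (grown < 0 || grown > MAX_ARRAY_SIZE) {
            // Near the top of the int range, max out the backing array in
            // one step rather than by small increments.
            grown = MAX_ARRAY_SIZE;
        }
        return grown;
    }
}
```

With this shape, a request just below Integer.MAX_VALUE lands directly on the cap instead of triggering repeated small reallocations.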
[jira] [Updated] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-17245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-17245: Description: When "ofs" is default, when running mapreduce job, YarnClient fails with below exception. {code:java} Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for scheme: ofs at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176) at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265) at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341) at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338) at java.security.AccessController.doPrivileged(Native Method){code} Observed that o3fs is also not defined, will use this jira to add those too. was: When "ofs" is default, when running mapreduce job, YarnClient fails with below exception. {code:java} Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for scheme: ofs at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176) at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265) at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341) at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338) at java.security.AccessController.doPrivileged(Native Method){code} > Add RootedOzFS AbstractFileSystem to core-default.xml > - > > Key: HADOOP-17245 > URL: https://issues.apache.org/jira/browse/HADOOP-17245 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > When "ofs" is default, when running mapreduce job, YarnClient fails with > below exception. 
> {code:java} > Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: > fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for > scheme: ofs > at > org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176) > at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265) > at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341) > at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338) > at java.security.AccessController.doPrivileged(Native Method){code} > Observed that o3fs is also not defined, will use this jira to add those too.
[jira] [Created] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml
Bharat Viswanadham created HADOOP-17245: --- Summary: Add RootedOzFS AbstractFileSystem to core-default.xml Key: HADOOP-17245 URL: https://issues.apache.org/jira/browse/HADOOP-17245 Project: Hadoop Common Issue Type: Bug Reporter: Bharat Viswanadham Assignee: Bharat Viswanadham When "ofs" is default, when running mapreduce job, YarnClient fails with below exception. Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for scheme: ofs at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176) at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265) at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341) at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338) at java.security.AccessController.doPrivileged(Native Method)
[jira] [Updated] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-17245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-17245: Description: When "ofs" is default, when running mapreduce job, YarnClient fails with below exception. {code:java} Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for scheme: ofs at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176) at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265) at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341) at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338) at java.security.AccessController.doPrivileged(Native Method){code} was: When "ofs" is default, when running mapreduce job, YarnClient fails with below exception. Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for scheme: ofs at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176) at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265) at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341) at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338) at java.security.AccessController.doPrivileged(Native Method) > Add RootedOzFS AbstractFileSystem to core-default.xml > - > > Key: HADOOP-17245 > URL: https://issues.apache.org/jira/browse/HADOOP-17245 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > When "ofs" is default, when running mapreduce job, YarnClient fails with > below exception. 
> {code:java} > Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: > fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for > scheme: ofs > at > org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176) > at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265) > at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341) > at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338) > at java.security.AccessController.doPrivileged(Native Method){code}
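For reference, the fix for HADOOP-17245 amounts to registering the Ozone AbstractFileSystem bindings in core-default.xml, roughly along these lines. The implementation class names below are our best recollection of the Ozone classes, not quoted from the patch, and should be checked against the actual change:

```xml
<property>
  <name>fs.AbstractFileSystem.ofs.impl</name>
  <value>org.apache.hadoop.fs.ozone.RootedOzFs</value>
  <description>AbstractFileSystem for the rooted Ozone scheme "ofs".</description>
</property>
<property>
  <name>fs.AbstractFileSystem.o3fs.impl</name>
  <value>org.apache.hadoop.fs.ozone.OzFs</value>
  <description>AbstractFileSystem for the bucket-scoped Ozone scheme "o3fs".</description>
</property>
```

With these entries present, FileContext/YarnClient can resolve the "ofs" and "o3fs" schemes without each deployment adding them to core-site.xml by hand.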
[jira] [Commented] (HADOOP-15457) Add Security-Related HTTP Response Header in WEBUIs.
[ https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976818#comment-16976818 ] Bharat Viswanadham commented on HADOOP-15457: - The following headers are added by default: X-XSS-Protection: 1; mode=block and X-Content-Type-Options: nosniff. Users do not need to do anything for these headers. To add customized additional headers, add them to core-site.xml as below: hadoop.http.header.http-header = http-header-val. To add the HSTS header, it should be added like below, with the value customized to the deployment's security needs: hadoop.http.header.Strict_Transport_Security = max-age=7200; includeSubDomains; preload > Add Security-Related HTTP Response Header in WEBUIs. > > > Key: HADOOP-15457 > URL: https://issues.apache.org/jira/browse/HADOOP-15457 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Kanwaljeet Sachdev >Assignee: Kanwaljeet Sachdev >Priority: Major > Labels: security > Fix For: 3.2.0 > > Attachments: HADOOP-15457.001.patch, HADOOP-15457.002.patch, > HADOOP-15457.003.patch, HADOOP-15457.004.patch, HADOOP-15457.005.patch, > YARN-8198.001.patch, YARN-8198.002.patch, YARN-8198.003.patch, > YARN-8198.004.patch, YARN-8198.005.patch > > > As of today, YARN web-ui lacks certain security related http response > headers. We are planning to add few default ones and also add support for > headers to be able to get added via xml config. Planning to make the below > two as default. > * X-XSS-Protection: 1; mode=block > * X-Content-Type-Options: nosniff > > Support for headers via config properties in core-site.xml will be along the > below lines > {code:java} > > hadoop.http.header.Strict_Transport_Security > valHSTSFromXML > {code} > In the above example, valHSTSFromXML is an example value, this should be > configured according to the security requirements. > With this Jira, users can set required headers by prefixing HTTP header with > hadoop.http.header. 
and configure with the required value in their > core-site.xml. > Example: > > {code:java} > > hadoop.http.header.http-header > > http-header-value > > {code} > > A regex matcher will lift these properties and add into the response header > when Jetty prepares the response.
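The "regex matcher will lift these properties" step described above can be illustrated with a small self-contained sketch. This is our own code, not the actual HttpServer2 implementation; the underscore-to-hyphen mapping is inferred from the Strict_Transport_Security example, since header names like Strict-Transport-Security are spelled with underscores in the XML property key:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HeaderConfigDemo {
    static final String PREFIX = "hadoop.http.header.";

    // Lift every config key starting with the prefix into a response header,
    // mapping underscores in the key to hyphens in the header name
    // (e.g. Strict_Transport_Security -> Strict-Transport-Security).
    static Map<String, String> liftHeaders(Map<String, String> conf) {
        Map<String, String> headers = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : conf.entrySet()) {
            if (e.getKey().startsWith(PREFIX)) {
                String name = e.getKey().substring(PREFIX.length()).replace('_', '-');
                headers.put(name, e.getValue());
            }
        }
        return headers;
    }
}
```

Keys that do not carry the hadoop.http.header. prefix are left untouched, which matches the opt-in behavior described in the Jira.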
[jira] [Updated] (HADOOP-15457) Add Security-Related HTTP Response Header in WEBUIs.
[ https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15457: Description: As of today, YARN web-ui lacks certain security related http response headers. We are planning to add few default ones and also add support for headers to be able to get added via xml config. Planning to make the below two as default. * X-XSS-Protection: 1; mode=block * X-Content-Type-Options: nosniff Support for headers via config properties in core-site.xml will be along the below lines {code:java} hadoop.http.header.Strict_Transport_Security valHSTSFromXML {code} In the above example, valHSTSFromXML is an example value, this should be configured according to the security requirements. With this Jira, users can set required headers by prefixing HTTP header with hadoop.http.header. and configure with the required value in their core-site.xml. Example: {code:java} hadoop.http.header.http-header> http-header-value {code} A regex matcher will lift these properties and add into the response header when Jetty prepares the response. was: As of today, YARN web-ui lacks certain security related http response headers. We are planning to add few default ones and also add support for headers to be able to get added via xml config. Planning to make the below two as default. * X-XSS-Protection: 1; mode=block * X-Content-Type-Options: nosniff Support for headers via config properties in core-site.xml will be along the below lines {code:java} hadoop.http.header.Strict_Transport_Security valHSTSFromXML {code} In the above example, valHSTSFromXML is an example value, this should be configured according to the security requirements. With this Jira, users can set required headers by prefixing HTTP header with hadoop.http.header.<> and configure with required value in their core-site.xml. 
Example: {code:java} hadoop.http.header.http-header http-header-value {code} A regex matcher will lift these properties and add into the response header when Jetty prepares the response.
[jira] [Updated] (HADOOP-15457) Add Security-Related HTTP Response Header in WEBUIs.
[ https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15457: Description: As of today, YARN web-ui lacks certain security related http response headers. We are planning to add few default ones and also add support for headers to be able to get added via xml config. Planning to make the below two as default. * X-XSS-Protection: 1; mode=block * X-Content-Type-Options: nosniff Support for headers via config properties in core-site.xml will be along the below lines {code:java} hadoop.http.header.Strict_Transport_Security valHSTSFromXML {code} In the above example, valHSTSFromXML is an example value, this should be configured according to the security requirements. With this Jira, users can set required headers by prefixing HTTP header with hadoop.http.header.<> and configure with required value in their core-site.xml. Example: {code:java} hadoop.http.header.http-header> http-header-value {code} A regex matcher will lift these properties and add into the response header when Jetty prepares the response. was: As of today, YARN web-ui lacks certain security related http response headers. We are planning to add few default ones and also add support for headers to be able to get added via xml config. Planning to make the below two as default. * X-XSS-Protection: 1; mode=block * X-Content-Type-Options: nosniff Support for headers via config properties in core-site.xml will be along the below lines {code:java} hadoop.http.header.Strict_Transport_Security valHSTSFromXML {code} In the above example, valHSTSFromXML is an example value, this should be configured according to the security requirements. With this Jira, users can set required headers by prefixing HTTP header with hadoop.http.header.<> and configure with required value. 
Example: {code:java} hadoop.http.header.http-header http-header-value {code} A regex matcher will lift these properties and add into the response header when Jetty prepares the response.
[jira] [Updated] (HADOOP-15457) Add Security-Related HTTP Response Header in WEBUIs.
[ https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15457: Description: As of today, YARN web-ui lacks certain security related http response headers. We are planning to add few default ones and also add support for headers to be able to get added via xml config. Planning to make the below two as default. * X-XSS-Protection: 1; mode=block * X-Content-Type-Options: nosniff Support for headers via config properties in core-site.xml will be along the below lines {code:java} hadoop.http.header.Strict_Transport_Security valHSTSFromXML {code} In the above example, valHSTSFromXML is an example value, this should be configured according to the security requirements. With this Jira, users can set required headers by prefixing HTTP header with hadoop.http.header.<> and configure with required value. Example: {code:java} hadoop.http.header.http-header> http-header-value {code} A regex matcher will lift these properties and add into the response header when Jetty prepares the response. was: As of today, YARN web-ui lacks certain security related http response headers. We are planning to add few default ones and also add support for headers to be able to get added via xml config. Planning to make the below two as default. * X-XSS-Protection: 1; mode=block * X-Content-Type-Options: nosniff Support for headers via config properties in core-site.xml will be along the below lines {code:java} hadoop.http.header.Strict_Transport_Security valHSTSFromXML {code} A regex matcher will lift these properties and add into the response header when Jetty prepares the response. > Add Security-Related HTTP Response Header in WEBUIs. 
[jira] [Commented] (HADOOP-15457) Add Security-Related HTTP Response Header in WEBUIs.
[ https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16975467#comment-16975467 ] Bharat Viswanadham commented on HADOOP-15457: - Hi [~kanwaljeets] [~rkanter] Just want to understand this, in Jira description for other http headers it is said "add support for headers to be able to get added via xml config" But in the code, I see we have a regex and reading all the values matching with regex from the configuration. Like for example to set HSTS header, I think we should be set as {code:java} hadoop.http.header.Strict_Transport_Security max-age=7200; includeSubDomains; preload . {code} So do you mean here reading from xml config means, reading from core-site.xml, and gave some sample value for HSTS header? hadoop.http.header.Strict_Transport_Security valHSTSFromXML > Add Security-Related HTTP Response Header in WEBUIs. > > > Key: HADOOP-15457 > URL: https://issues.apache.org/jira/browse/HADOOP-15457 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Kanwaljeet Sachdev >Assignee: Kanwaljeet Sachdev >Priority: Major > Labels: security > Fix For: 3.2.0 > > Attachments: HADOOP-15457.001.patch, HADOOP-15457.002.patch, > HADOOP-15457.003.patch, HADOOP-15457.004.patch, HADOOP-15457.005.patch, > YARN-8198.001.patch, YARN-8198.002.patch, YARN-8198.003.patch, > YARN-8198.004.patch, YARN-8198.005.patch > > > As of today, YARN web-ui lacks certain security related http response > headers. We are planning to add few default ones and also add support for > headers to be able to get added via xml config. Planning to make the below > two as default. 
> * X-XSS-Protection: 1; mode=block > * X-Content-Type-Options: nosniff > > Support for headers via config properties in core-site.xml will be along the > below lines > {code:java} > > hadoop.http.header.Strict_Transport_Security > valHSTSFromXML > {code} > > A regex matcher will lift these properties and add into the response header > when Jetty prepares the response.
[jira] [Assigned] (HADOOP-11245) Update NFS gateway to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-11245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned HADOOP-11245: --- Assignee: (was: Bharat Viswanadham) > Update NFS gateway to use Netty4 > > > Key: HADOOP-11245 > URL: https://issues.apache.org/jira/browse/HADOOP-11245 > Project: Hadoop Common > Issue Type: Sub-task > Components: nfs >Reporter: Brandon Li >Priority: Major >
[jira] [Assigned] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned HADOOP-15327: --- Assignee: (was: Bharat Viswanadham) > Upgrade MR ShuffleHandler to use Netty4 > --- > > Key: HADOOP-15327 > URL: https://issues.apache.org/jira/browse/HADOOP-15327 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Priority: Major > > This way, we can remove the dependencies on the netty3 (jboss.netty)
[jira] [Comment Edited] (HADOOP-16487) Update jackson-databind to 2.9.9.2
[ https://issues.apache.org/jira/browse/HADOOP-16487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899328#comment-16899328 ] Bharat Viswanadham edited comment on HADOOP-16487 at 8/3/19 2:18 AM: - +1 pending CI. was (Author: bharatviswa): +1. > Update jackson-databind to 2.9.9.2 > -- > > Key: HADOOP-16487 > URL: https://issues.apache.org/jira/browse/HADOOP-16487 > Project: Hadoop Common > Issue Type: Bug >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Critical > Attachments: HADOOP-16487.001.patch > > > Another CVE in jackson-databind: > https://nvd.nist.gov/vuln/detail/CVE-2019-14379 > jackson-databind 2.9.9.2 is available: > https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-databind > Side note: Here's a discussion jira on whether to remove jackson-databind due > to the increasing number of CVEs in this dependency recently: HADOOP-16485
[jira] [Commented] (HADOOP-16487) Update jackson-databind to 2.9.9.2
[ https://issues.apache.org/jira/browse/HADOOP-16487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899328#comment-16899328 ] Bharat Viswanadham commented on HADOOP-16487: - +1.
[jira] [Updated] (HADOOP-16373) Fix typo in FileSystemShell#test documentation
[ https://issues.apache.org/jira/browse/HADOOP-16373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-16373: Fix Version/s: (was: 0.5.0) 3.3.0 > Fix typo in FileSystemShell#test documentation > -- > > Key: HADOOP-16373 > URL: https://issues.apache.org/jira/browse/HADOOP-16373 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 2.7.1, 3.0.0, 3.2.0, 2.9.2, 3.1.2 >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Trivial > Fix For: 3.3.0 > > > Typo in describing option -d > https://hadoop.apache.org/docs/r3.1.2/hadoop-project-dist/hadoop-common/FileSystemShell.html#test > {code:java} > test > Usage: hadoop fs -test -[defsz] URI > Options: > -d: f the path is a directory, return 0. > -e: if the path exists, return 0. > -f: if the path is a file, return 0. > -s: if the path is not empty, return 0. > -z: if the file is zero length, return 0. > {code}
[jira] [Resolved] (HADOOP-16373) Fix typo in FileSystemShell#test documentation
[ https://issues.apache.org/jira/browse/HADOOP-16373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham resolved HADOOP-16373. - Resolution: Fixed Fix Version/s: 0.5.0
[jira] [Created] (HADOOP-16372) Fix typo in DFSUtil getHttpPolicy method
Bharat Viswanadham created HADOOP-16372: --- Summary: Fix typo in DFSUtil getHttpPolicy method Key: HADOOP-16372 URL: https://issues.apache.org/jira/browse/HADOOP-16372 Project: Hadoop Common Issue Type: Bug Reporter: Bharat Viswanadham [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java#L1479]
[jira] [Comment Edited] (HADOOP-16248) Fix MutableQuantiles memory leak
[ https://issues.apache.org/jira/browse/HADOOP-16248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840893#comment-16840893 ] Bharat Viswanadham edited comment on HADOOP-16248 at 5/16/19 1:06 AM:
--
Hi [~adaboville]
You can create a patch with the below format: <jira-id>.<patch-revision>.patch. In your case, it will be HADOOP-16248.00.patch

was (Author: bharatviswa):
Hi [~adaboville]
You can create a patch with the below format: <jira-id>.<patch-revision>.patch

> Fix MutableQuantiles memory leak
>
>
> Key: HADOOP-16248
> URL: https://issues.apache.org/jira/browse/HADOOP-16248
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.9.2
> Reporter: Alexis Daboville
> Priority: Major
> Attachments: mutable-quantiles-leak.png, mutable-quantiles.patch
>
> In some circumstances (high GC, high CPU usage, creating lots of S3AFileSystem) it is possible for MutableQuantiles::scheduler [1] to fall behind processing tasks that are submitted to it; because tasks are submitted on a regular schedule, the unbounded queue backing the {{ExecutorService}} might grow to several gigs [2]. By using {{scheduleWithFixedDelay}} instead, we ensure that under pressure this leak won't happen. In order to mitigate the growth, a simple fix [3] is proposed, simply replacing {{scheduler.scheduleAtFixedRate}} by {{scheduler.scheduleWithFixedDelay}}.
> [1] it is single threaded and shared across all instances of {{MutableQuantiles}}: [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableQuantiles.java#L66-L68]
> [2] see attached mutable-quantiles-leak.png.
> [3] mutable-quantiles.patch
[jira] [Commented] (HADOOP-16248) Fix MutableQuantiles memory leak
[ https://issues.apache.org/jira/browse/HADOOP-16248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840893#comment-16840893 ] Bharat Viswanadham commented on HADOOP-16248:
-
Hi [~adaboville]
You can create a patch with the below format: <jira-id>.<patch-revision>.patch

> Fix MutableQuantiles memory leak
>
>
> Key: HADOOP-16248
> URL: https://issues.apache.org/jira/browse/HADOOP-16248
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.9.2
> Reporter: Alexis Daboville
> Priority: Major
> Attachments: mutable-quantiles-leak.png, mutable-quantiles.patch
>
> In some circumstances (high GC, high CPU usage, creating lots of S3AFileSystem) it is possible for MutableQuantiles::scheduler [1] to fall behind processing tasks that are submitted to it; because tasks are submitted on a regular schedule, the unbounded queue backing the {{ExecutorService}} might grow to several gigs [2]. By using {{scheduleWithFixedDelay}} instead, we ensure that under pressure this leak won't happen. In order to mitigate the growth, a simple fix [3] is proposed, simply replacing {{scheduler.scheduleAtFixedRate}} by {{scheduler.scheduleWithFixedDelay}}.
> [1] it is single threaded and shared across all instances of {{MutableQuantiles}}: [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableQuantiles.java#L66-L68]
> [2] see attached mutable-quantiles-leak.png.
> [3] mutable-quantiles.patch
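The behavioural difference behind the proposed fix can be shown with a small stand-alone harness. This is illustrative only (the class and method names are made up, and it is not the Hadoop MutableQuantiles code): with `scheduleAtFixedRate` a task that overruns its period executes back-to-back to catch up, while `scheduleWithFixedDelay` only schedules the next run after the previous one finishes, so pending work cannot accumulate.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Counts how often a slow periodic task fires under each scheduling policy.
public class ScheduleDemo {
    static int run(boolean fixedRate) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        AtomicInteger count = new AtomicInteger();
        Runnable slowTask = () -> {
            count.incrementAndGet();
            try {
                Thread.sleep(150); // task takes longer than its 100 ms period
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        if (fixedRate) {
            // Original behaviour: runs are due on a fixed timetable, so a slow
            // task falls behind and executes back-to-back to catch up.
            scheduler.scheduleAtFixedRate(slowTask, 0, 100, TimeUnit.MILLISECONDS);
        } else {
            // Proposed fix: the delay is measured from the end of the previous
            // run, so overdue work never piles up.
            scheduler.scheduleWithFixedDelay(slowTask, 0, 100, TimeUnit.MILLISECONDS);
        }
        try {
            Thread.sleep(1000); // observe roughly one second of scheduling
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        scheduler.shutdownNow();
        return count.get();
    }

    public static void main(String[] args) {
        System.out.println("fixedRate runs:  " + run(true));
        System.out.println("fixedDelay runs: " + run(false));
    }
}
```

The fixed-rate variant fires at least as often as the fixed-delay one over the same window, which is the catch-up pressure the JIRA describes.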
[jira] [Updated] (HADOOP-16247) NPE in FsUrlConnection
[ https://issues.apache.org/jira/browse/HADOOP-16247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-16247:
Fix Version/s: 3.2.1

> NPE in FsUrlConnection
> --
>
> Key: HADOOP-16247
> URL: https://issues.apache.org/jira/browse/HADOOP-16247
> Project: Hadoop Common
> Issue Type: Bug
> Components: hdfs-client
> Affects Versions: 3.1.2
> Reporter: Karthik Palanisamy
> Assignee: Karthik Palanisamy
> Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16247-001.patch, HADOOP-16247-002.patch, HADOOP-16247-003.patch, HADOOP-16247-004.patch, HADOOP-16247-005.patch, HADOOP-16247-006.patch, HADOOP-16247-007.patch, HADOOP-16247-008.patch, HADOOP-16247-009.patch
>
> FsUrlConnection doesn't handle relativePath correctly after the change [HADOOP-15217|https://issues.apache.org/jira/browse/HADOOP-15217]
> {code}
> Exception in thread "main" java.lang.NullPointerException
> at org.apache.hadoop.fs.Path.isUriPathAbsolute(Path.java:385)
> at org.apache.hadoop.fs.Path.isAbsolute(Path.java:395)
> at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:87)
> at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:636)
> at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:930)
> at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
> at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
> at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:146)
> at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:347)
> at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899)
> at org.apache.hadoop.fs.FsUrlConnection.connect(FsUrlConnection.java:62)
> at org.apache.hadoop.fs.FsUrlConnection.getInputStream(FsUrlConnection.java:71)
> at java.net.URL.openStream(URL.java:1045)
> at UrlProblem.testRelativePath(UrlProblem.java:33)
> at UrlProblem.main(UrlProblem.java:19)
> {code}
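A plausible mechanism for the null (a plain-JDK sketch, not Hadoop's actual code path): `java.net.URI` treats a `file:` URL whose scheme-specific part lacks a leading slash as an opaque URI, and opaque URIs report `getPath() == null` — exactly the kind of value that would trip `Path.isUriPathAbsolute()`. The URL strings below are made up for the demo.

```java
import java.net.URI;

// Demonstrates that a relative file: URL parses as an opaque URI with a
// null path, while an absolute file: URL parses hierarchically.
public class RelativeUriDemo {
    public static void main(String[] args) {
        URI relative = URI.create("file:some/relative.txt");
        URI absolute = URI.create("file:///tmp/absolute.txt");
        System.out.println(relative.isOpaque() + " " + relative.getPath()); // true null
        System.out.println(absolute.isOpaque() + " " + absolute.getPath()); // false /tmp/absolute.txt
    }
}
```

Any code that dereferences `getPath()` on such a URI without a null check would throw the NullPointerException shown in the trace.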
[jira] [Comment Edited] (HADOOP-16247) NPE in FsUrlConnection
[ https://issues.apache.org/jira/browse/HADOOP-16247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840889#comment-16840889 ] Bharat Viswanadham edited comment on HADOOP-16247 at 5/16/19 1:03 AM: -- Thank You [~kpalanisamy] for the contribution, [~jojochuang] and [~daryn] for the review. I have committed this to the trunk, branch-3.1, and branch-3.2. was (Author: bharatviswa): Thank You [~kpalanisamy] for the contribution, [~jojochuang] and [~daryn] for the review. I have committed this to trunk and branch-3.1. > NPE in FsUrlConnection > -- > > Key: HADOOP-16247 > URL: https://issues.apache.org/jira/browse/HADOOP-16247 > Project: Hadoop Common > Issue Type: Bug > Components: hdfs-client >Affects Versions: 3.1.2 >Reporter: Karthik Palanisamy >Assignee: Karthik Palanisamy >Priority: Major > Fix For: 3.3.0, 3.2.1, 3.1.3 > > Attachments: HADOOP-16247-001.patch, HADOOP-16247-002.patch, > HADOOP-16247-003.patch, HADOOP-16247-004.patch, HADOOP-16247-005.patch, > HADOOP-16247-006.patch, HADOOP-16247-007.patch, HADOOP-16247-008.patch, > HADOOP-16247-009.patch > > > FsUrlConnection doesn't handle relativePath correctly after the change > [HADOOP-15217|https://issues.apache.org/jira/browse/HADOOP-15217] > {code} > Exception in thread "main" java.lang.NullPointerException > at org.apache.hadoop.fs.Path.isUriPathAbsolute(Path.java:385) > at org.apache.hadoop.fs.Path.isAbsolute(Path.java:395) > at > org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:87) > at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:636) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:930) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631) > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454) > at > org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.(ChecksumFileSystem.java:146) > at 
org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:347) > at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899) > at org.apache.hadoop.fs.FsUrlConnection.connect(FsUrlConnection.java:62) > at > org.apache.hadoop.fs.FsUrlConnection.getInputStream(FsUrlConnection.java:71) > at java.net.URL.openStream(URL.java:1045) > at UrlProblem.testRelativePath(UrlProblem.java:33) > at UrlProblem.main(UrlProblem.java:19) > {code}
[jira] [Comment Edited] (HADOOP-16247) NPE in FsUrlConnection
[ https://issues.apache.org/jira/browse/HADOOP-16247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840889#comment-16840889 ] Bharat Viswanadham edited comment on HADOOP-16247 at 5/16/19 12:55 AM: --- Thank You [~kpalanisamy] for the contribution, [~jojochuang] and [~daryn] for the review. I have committed this to trunk and branch-3.1. was (Author: bharatviswa): Thank You [~kpalanisamy] for the contribution, [~jojochuang] and [~daryn] for the review. I have committed this to trunk. > NPE in FsUrlConnection > -- > > Key: HADOOP-16247 > URL: https://issues.apache.org/jira/browse/HADOOP-16247 > Project: Hadoop Common > Issue Type: Bug > Components: hdfs-client >Affects Versions: 3.1.2 >Reporter: Karthik Palanisamy >Assignee: Karthik Palanisamy >Priority: Major > Fix For: 3.3.0, 3.1.3 > > Attachments: HADOOP-16247-001.patch, HADOOP-16247-002.patch, > HADOOP-16247-003.patch, HADOOP-16247-004.patch, HADOOP-16247-005.patch, > HADOOP-16247-006.patch, HADOOP-16247-007.patch, HADOOP-16247-008.patch, > HADOOP-16247-009.patch > > > FsUrlConnection doesn't handle relativePath correctly after the change > [HADOOP-15217|https://issues.apache.org/jira/browse/HADOOP-15217] > {code} > Exception in thread "main" java.lang.NullPointerException > at org.apache.hadoop.fs.Path.isUriPathAbsolute(Path.java:385) > at org.apache.hadoop.fs.Path.isAbsolute(Path.java:395) > at > org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:87) > at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:636) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:930) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631) > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454) > at > org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.(ChecksumFileSystem.java:146) > at 
org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:347) > at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899) > at org.apache.hadoop.fs.FsUrlConnection.connect(FsUrlConnection.java:62) > at > org.apache.hadoop.fs.FsUrlConnection.getInputStream(FsUrlConnection.java:71) > at java.net.URL.openStream(URL.java:1045) > at UrlProblem.testRelativePath(UrlProblem.java:33) > at UrlProblem.main(UrlProblem.java:19) > {code}
[jira] [Updated] (HADOOP-16247) NPE in FsUrlConnection
[ https://issues.apache.org/jira/browse/HADOOP-16247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-16247: Fix Version/s: 3.1.3 > NPE in FsUrlConnection > -- > > Key: HADOOP-16247 > URL: https://issues.apache.org/jira/browse/HADOOP-16247 > Project: Hadoop Common > Issue Type: Bug > Components: hdfs-client >Affects Versions: 3.1.2 >Reporter: Karthik Palanisamy >Assignee: Karthik Palanisamy >Priority: Major > Fix For: 3.3.0, 3.1.3 > > Attachments: HADOOP-16247-001.patch, HADOOP-16247-002.patch, > HADOOP-16247-003.patch, HADOOP-16247-004.patch, HADOOP-16247-005.patch, > HADOOP-16247-006.patch, HADOOP-16247-007.patch, HADOOP-16247-008.patch, > HADOOP-16247-009.patch > > > FsUrlConnection doesn't handle relativePath correctly after the change > [HADOOP-15217|https://issues.apache.org/jira/browse/HADOOP-15217] > {code} > Exception in thread "main" java.lang.NullPointerException > at org.apache.hadoop.fs.Path.isUriPathAbsolute(Path.java:385) > at org.apache.hadoop.fs.Path.isAbsolute(Path.java:395) > at > org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:87) > at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:636) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:930) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631) > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454) > at > org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.(ChecksumFileSystem.java:146) > at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:347) > at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899) > at org.apache.hadoop.fs.FsUrlConnection.connect(FsUrlConnection.java:62) > at > org.apache.hadoop.fs.FsUrlConnection.getInputStream(FsUrlConnection.java:71) > at java.net.URL.openStream(URL.java:1045) > at 
UrlProblem.testRelativePath(UrlProblem.java:33) > at UrlProblem.main(UrlProblem.java:19) > {code}
[jira] [Updated] (HADOOP-16247) NPE in FsUrlConnection
[ https://issues.apache.org/jira/browse/HADOOP-16247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-16247: Resolution: Fixed Fix Version/s: 3.3.0 Status: Resolved (was: Patch Available) Thank You [~kpalanisamy] for the contribution, [~jojochuang] and [~daryn] for the review. I have committed this to trunk. > NPE in FsUrlConnection > -- > > Key: HADOOP-16247 > URL: https://issues.apache.org/jira/browse/HADOOP-16247 > Project: Hadoop Common > Issue Type: Bug > Components: hdfs-client >Affects Versions: 3.1.2 >Reporter: Karthik Palanisamy >Assignee: Karthik Palanisamy >Priority: Major > Fix For: 3.3.0 > > Attachments: HADOOP-16247-001.patch, HADOOP-16247-002.patch, > HADOOP-16247-003.patch, HADOOP-16247-004.patch, HADOOP-16247-005.patch, > HADOOP-16247-006.patch, HADOOP-16247-007.patch, HADOOP-16247-008.patch, > HADOOP-16247-009.patch > > > FsUrlConnection doesn't handle relativePath correctly after the change > [HADOOP-15217|https://issues.apache.org/jira/browse/HADOOP-15217] > {code} > Exception in thread "main" java.lang.NullPointerException > at org.apache.hadoop.fs.Path.isUriPathAbsolute(Path.java:385) > at org.apache.hadoop.fs.Path.isAbsolute(Path.java:395) > at > org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:87) > at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:636) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:930) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631) > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454) > at > org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.(ChecksumFileSystem.java:146) > at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:347) > at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899) > at 
org.apache.hadoop.fs.FsUrlConnection.connect(FsUrlConnection.java:62) > at > org.apache.hadoop.fs.FsUrlConnection.getInputStream(FsUrlConnection.java:71) > at java.net.URL.openStream(URL.java:1045) > at UrlProblem.testRelativePath(UrlProblem.java:33) > at UrlProblem.main(UrlProblem.java:19) > {code}
[jira] [Commented] (HADOOP-16247) NPE in FsUrlConnection
[ https://issues.apache.org/jira/browse/HADOOP-16247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840882#comment-16840882 ] Bharat Viswanadham commented on HADOOP-16247: - As no further comments, I will commit this shortly. > NPE in FsUrlConnection > -- > > Key: HADOOP-16247 > URL: https://issues.apache.org/jira/browse/HADOOP-16247 > Project: Hadoop Common > Issue Type: Bug > Components: hdfs-client >Affects Versions: 3.1.2 >Reporter: Karthik Palanisamy >Assignee: Karthik Palanisamy >Priority: Major > Attachments: HADOOP-16247-001.patch, HADOOP-16247-002.patch, > HADOOP-16247-003.patch, HADOOP-16247-004.patch, HADOOP-16247-005.patch, > HADOOP-16247-006.patch, HADOOP-16247-007.patch, HADOOP-16247-008.patch, > HADOOP-16247-009.patch > > > FsUrlConnection doesn't handle relativePath correctly after the change > [HADOOP-15217|https://issues.apache.org/jira/browse/HADOOP-15217] > {code} > Exception in thread "main" java.lang.NullPointerException > at org.apache.hadoop.fs.Path.isUriPathAbsolute(Path.java:385) > at org.apache.hadoop.fs.Path.isAbsolute(Path.java:395) > at > org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:87) > at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:636) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:930) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631) > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454) > at > org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.(ChecksumFileSystem.java:146) > at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:347) > at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899) > at org.apache.hadoop.fs.FsUrlConnection.connect(FsUrlConnection.java:62) > at > org.apache.hadoop.fs.FsUrlConnection.getInputStream(FsUrlConnection.java:71) > at 
java.net.URL.openStream(URL.java:1045) > at UrlProblem.testRelativePath(UrlProblem.java:33) > at UrlProblem.main(UrlProblem.java:19) > {code}
[jira] [Comment Edited] (HADOOP-16247) NPE in FsUrlConnection
[ https://issues.apache.org/jira/browse/HADOOP-16247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16836875#comment-16836875 ] Bharat Viswanadham edited comment on HADOOP-16247 at 5/10/19 4:56 AM: -- +1 LGTM, [~kpalanisamy] can you fix checkstyle issues reported by jenkins, it is strange it has given +1, but it has checkstyle errors. I Will wait for a couple of days if no more comments I will commit this patch, as [~daryn] and [~jojochuang] have already reviewed and had some comments. was (Author: bharatviswa): +1 LGTM. I Will wait for a couple of days if no more comments I will commit this patch, as [~daryn] and [~jojochuang] have already reviewed and had some comments. > NPE in FsUrlConnection > -- > > Key: HADOOP-16247 > URL: https://issues.apache.org/jira/browse/HADOOP-16247 > Project: Hadoop Common > Issue Type: Bug > Components: hdfs-client >Affects Versions: 3.1.2 >Reporter: Karthik Palanisamy >Assignee: Karthik Palanisamy >Priority: Major > Attachments: HADOOP-16247-001.patch, HADOOP-16247-002.patch, > HADOOP-16247-003.patch, HADOOP-16247-004.patch, HADOOP-16247-005.patch, > HADOOP-16247-006.patch, HADOOP-16247-007.patch, HADOOP-16247-008.patch > > > FsUrlConnection doesn't handle relativePath correctly after the change > [HADOOP-15217|https://issues.apache.org/jira/browse/HADOOP-15217] > {code} > Exception in thread "main" java.lang.NullPointerException > at org.apache.hadoop.fs.Path.isUriPathAbsolute(Path.java:385) > at org.apache.hadoop.fs.Path.isAbsolute(Path.java:395) > at > org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:87) > at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:636) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:930) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631) > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454) > at 
> org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:146) > at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:347) > at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899) > at org.apache.hadoop.fs.FsUrlConnection.connect(FsUrlConnection.java:62) > at > org.apache.hadoop.fs.FsUrlConnection.getInputStream(FsUrlConnection.java:71) > at java.net.URL.openStream(URL.java:1045) > at UrlProblem.testRelativePath(UrlProblem.java:33) > at UrlProblem.main(UrlProblem.java:19) > {code}
[jira] [Commented] (HADOOP-16247) NPE in FsUrlConnection
[ https://issues.apache.org/jira/browse/HADOOP-16247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16836875#comment-16836875 ] Bharat Viswanadham commented on HADOOP-16247: - +1 LGTM. I Will wait for a couple of days if no more comments I will commit this patch, as [~daryn] and [~jojochuang] have already reviewed and some comment. > NPE in FsUrlConnection > -- > > Key: HADOOP-16247 > URL: https://issues.apache.org/jira/browse/HADOOP-16247 > Project: Hadoop Common > Issue Type: Bug > Components: hdfs-client >Affects Versions: 3.1.2 >Reporter: Karthik Palanisamy >Assignee: Karthik Palanisamy >Priority: Major > Attachments: HADOOP-16247-001.patch, HADOOP-16247-002.patch, > HADOOP-16247-003.patch, HADOOP-16247-004.patch, HADOOP-16247-005.patch, > HADOOP-16247-006.patch, HADOOP-16247-007.patch, HADOOP-16247-008.patch > > > FsUrlConnection doesn't handle relativePath correctly after the change > [HADOOP-15217|https://issues.apache.org/jira/browse/HADOOP-15217] > {code} > Exception in thread "main" java.lang.NullPointerException > at org.apache.hadoop.fs.Path.isUriPathAbsolute(Path.java:385) > at org.apache.hadoop.fs.Path.isAbsolute(Path.java:395) > at > org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:87) > at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:636) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:930) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631) > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454) > at > org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.(ChecksumFileSystem.java:146) > at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:347) > at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899) > at org.apache.hadoop.fs.FsUrlConnection.connect(FsUrlConnection.java:62) > at > 
org.apache.hadoop.fs.FsUrlConnection.getInputStream(FsUrlConnection.java:71) > at java.net.URL.openStream(URL.java:1045) > at UrlProblem.testRelativePath(UrlProblem.java:33) > at UrlProblem.main(UrlProblem.java:19) > {code}
[jira] [Comment Edited] (HADOOP-16247) NPE in FsUrlConnection
[ https://issues.apache.org/jira/browse/HADOOP-16247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16836875#comment-16836875 ] Bharat Viswanadham edited comment on HADOOP-16247 at 5/10/19 4:38 AM: -- +1 LGTM. I Will wait for a couple of days if no more comments I will commit this patch, as [~daryn] and [~jojochuang] have already reviewed and had some comments. was (Author: bharatviswa): +1 LGTM. I Will wait for a couple of days if no more comments I will commit this patch, as [~daryn] and [~jojochuang] have already reviewed and some comment. > NPE in FsUrlConnection > -- > > Key: HADOOP-16247 > URL: https://issues.apache.org/jira/browse/HADOOP-16247 > Project: Hadoop Common > Issue Type: Bug > Components: hdfs-client >Affects Versions: 3.1.2 >Reporter: Karthik Palanisamy >Assignee: Karthik Palanisamy >Priority: Major > Attachments: HADOOP-16247-001.patch, HADOOP-16247-002.patch, > HADOOP-16247-003.patch, HADOOP-16247-004.patch, HADOOP-16247-005.patch, > HADOOP-16247-006.patch, HADOOP-16247-007.patch, HADOOP-16247-008.patch > > > FsUrlConnection doesn't handle relativePath correctly after the change > [HADOOP-15217|https://issues.apache.org/jira/browse/HADOOP-15217] > {code} > Exception in thread "main" java.lang.NullPointerException > at org.apache.hadoop.fs.Path.isUriPathAbsolute(Path.java:385) > at org.apache.hadoop.fs.Path.isAbsolute(Path.java:395) > at > org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:87) > at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:636) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:930) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631) > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454) > at > org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.(ChecksumFileSystem.java:146) > at 
org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:347) > at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899) > at org.apache.hadoop.fs.FsUrlConnection.connect(FsUrlConnection.java:62) > at > org.apache.hadoop.fs.FsUrlConnection.getInputStream(FsUrlConnection.java:71) > at java.net.URL.openStream(URL.java:1045) > at UrlProblem.testRelativePath(UrlProblem.java:33) > at UrlProblem.main(UrlProblem.java:19) > {code}
[jira] [Comment Edited] (HADOOP-16247) NPE in FsUrlConnection
[ https://issues.apache.org/jira/browse/HADOOP-16247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16836742#comment-16836742 ] Bharat Viswanadham edited comment on HADOOP-16247 at 5/9/19 10:19 PM: -- Patch over all LGTM. Could you add some explanation for the change, it will be easy when someone is reading the code for the first time. Once this is done, I am +1 with the change. Thank You [~kpalanisamy] for the fix. (I think now with this patch, it will fix the file scheme with a relative path, and for rest of the URI's it uses the old code, so it does not break anything. was (Author: bharatviswa): +1. Could you add some explanation for the change, it will be easy when someone is reading the code. Once this is done, I am +1 with the change. Thank You [~kpalanisamy] for the fix. (I think now with this patch, it will fix the file scheme with a relative path, and for rest of the URI's it uses the old code, so it does not break anything. > NPE in FsUrlConnection > -- > > Key: HADOOP-16247 > URL: https://issues.apache.org/jira/browse/HADOOP-16247 > Project: Hadoop Common > Issue Type: Bug > Components: hdfs-client >Affects Versions: 3.1.2 >Reporter: Karthik Palanisamy >Assignee: Karthik Palanisamy >Priority: Major > Attachments: HADOOP-16247-001.patch, HADOOP-16247-002.patch, > HADOOP-16247-003.patch, HADOOP-16247-004.patch, HADOOP-16247-005.patch, > HADOOP-16247-006.patch, HADOOP-16247-007.patch > > > FsUrlConnection doesn't handle relativePath correctly after the change > [HADOOP-15217|https://issues.apache.org/jira/browse/HADOOP-15217] > {code} > Exception in thread "main" java.lang.NullPointerException > at org.apache.hadoop.fs.Path.isUriPathAbsolute(Path.java:385) > at org.apache.hadoop.fs.Path.isAbsolute(Path.java:395) > at > org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:87) > at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:636) > at > 
org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:930) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631) > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454) > at > org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:146) > at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:347) > at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899) > at org.apache.hadoop.fs.FsUrlConnection.connect(FsUrlConnection.java:62) > at > org.apache.hadoop.fs.FsUrlConnection.getInputStream(FsUrlConnection.java:71) > at java.net.URL.openStream(URL.java:1045) > at UrlProblem.testRelativePath(UrlProblem.java:33) > at UrlProblem.main(UrlProblem.java:19) > {code}
[jira] [Commented] (HADOOP-16247) NPE in FsUrlConnection
[ https://issues.apache.org/jira/browse/HADOOP-16247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16836742#comment-16836742 ] Bharat Viswanadham commented on HADOOP-16247: - +1. Could you add some explanation for the change, it will be easy when someone is reading the code. Once this is done, I am +1 with the change. Thank You [~kpalanisamy] for the fix. (I think now with this patch, it will fix the file scheme with a relative path, and for rest of the URI's it uses the old code, so it does not break anything. > NPE in FsUrlConnection > -- > > Key: HADOOP-16247 > URL: https://issues.apache.org/jira/browse/HADOOP-16247 > Project: Hadoop Common > Issue Type: Bug > Components: hdfs-client >Affects Versions: 3.1.2 >Reporter: Karthik Palanisamy >Assignee: Karthik Palanisamy >Priority: Major > Attachments: HADOOP-16247-001.patch, HADOOP-16247-002.patch, > HADOOP-16247-003.patch, HADOOP-16247-004.patch, HADOOP-16247-005.patch, > HADOOP-16247-006.patch, HADOOP-16247-007.patch > > > FsUrlConnection doesn't handle relativePath correctly after the change > [HADOOP-15217|https://issues.apache.org/jira/browse/HADOOP-15217] > {code} > Exception in thread "main" java.lang.NullPointerException > at org.apache.hadoop.fs.Path.isUriPathAbsolute(Path.java:385) > at org.apache.hadoop.fs.Path.isAbsolute(Path.java:395) > at > org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:87) > at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:636) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:930) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631) > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454) > at > org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.(ChecksumFileSystem.java:146) > at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:347) > at 
org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899) > at org.apache.hadoop.fs.FsUrlConnection.connect(FsUrlConnection.java:62) > at > org.apache.hadoop.fs.FsUrlConnection.getInputStream(FsUrlConnection.java:71) > at java.net.URL.openStream(URL.java:1045) > at UrlProblem.testRelativePath(UrlProblem.java:33) > at UrlProblem.main(UrlProblem.java:19) > {code}
[jira] [Resolved] (HADOOP-16302) Fix typo on Hadoop Site Help dropdown menu
[ https://issues.apache.org/jira/browse/HADOOP-16302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham resolved HADOOP-16302. - Resolution: Fixed > Fix typo on Hadoop Site Help dropdown menu > -- > > Key: HADOOP-16302 > URL: https://issues.apache.org/jira/browse/HADOOP-16302 > Project: Hadoop Common > Issue Type: Bug > Components: site >Affects Versions: asf-site >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Minor > Attachments: Screen Shot 2019-05-07 at 11.57.01 PM.png > > > On hadoop.apache.org the Help tab on top menu bar has Sponsorship spelt as > Sponsorshop. > This jira aims to fix this typo. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16302) Fix typo on Hadoop Site Help dropdown menu
[ https://issues.apache.org/jira/browse/HADOOP-16302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835297#comment-16835297 ] Bharat Viswanadham commented on HADOOP-16302: - Thank You [~dineshchitlangia] for the fix. I have committed this. > Fix typo on Hadoop Site Help dropdown menu > -- > > Key: HADOOP-16302 > URL: https://issues.apache.org/jira/browse/HADOOP-16302 > Project: Hadoop Common > Issue Type: Bug > Components: site >Affects Versions: asf-site >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Minor > Attachments: Screen Shot 2019-05-07 at 11.57.01 PM.png > > > On hadoop.apache.org the Help tab on top menu bar has Sponsorship spelt as > Sponsorshop. > This jira aims to fix this typo. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-13656) fs -expunge to take a filesystem
[ https://issues.apache.org/jira/browse/HADOOP-13656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned HADOOP-13656: --- Assignee: Shweta (was: Bharat Viswanadham) > fs -expunge to take a filesystem > > > Key: HADOOP-13656 > URL: https://issues.apache.org/jira/browse/HADOOP-13656 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Assignee: Shweta >Priority: Minor > Attachments: HADOOP-13656.001.patch, HADOOP-13656.002.patch, > HADOOP-13656.003.patch, HADOOP-13656.004.patch, HADOOP-13656.005.patch > > > you can't pass in a filesystem or object store to {{fs -expunge}}; you have to > change the default fs > {code} > hadoop fs -expunge -D fs.defaultFS=s3a://bucket/ > {code} > If the command took an optional filesystem argument, it'd be better at > cleaning up object stores. Given that even deleted object store data runs up > bills, this could be appreciated. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-13656) fs -expunge to take a filesystem
[ https://issues.apache.org/jira/browse/HADOOP-13656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned HADOOP-13656: --- Assignee: Bharat Viswanadham (was: Shweta) > fs -expunge to take a filesystem > > > Key: HADOOP-13656 > URL: https://issues.apache.org/jira/browse/HADOOP-13656 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HADOOP-13656.001.patch, HADOOP-13656.002.patch, > HADOOP-13656.003.patch, HADOOP-13656.004.patch, HADOOP-13656.005.patch > > > you can't pass in a filesystem or object store to {{fs -expunge}}; you have to > change the default fs > {code} > hadoop fs -expunge -D fs.defaultFS=s3a://bucket/ > {code} > If the command took an optional filesystem argument, it'd be better at > cleaning up object stores. Given that even deleted object store data runs up > bills, this could be appreciated. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16243) Change Log Level to trace in NetUtils.java
[ https://issues.apache.org/jira/browse/HADOOP-16243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-16243: Resolution: Fixed Fix Version/s: 3.3.0 Status: Resolved (was: Patch Available) This has been committed to trunk by [~arpitagarwal]. Thank You [~candychencan] for the fix and [~arpitagarwal] for the review and commit. > Change Log Level to trace in NetUtils.java > -- > > Key: HADOOP-16243 > URL: https://issues.apache.org/jira/browse/HADOOP-16243 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Bharat Viswanadham >Assignee: chencan >Priority: Major > Labels: newbie > Fix For: 3.3.0 > > Attachments: HDDS-1407.001.patch > > > When there is no String constructor for the exception, we log a warning message > and rethrow the exception. We can change the log level to TRACE/DEBUG. > > {code:java} > private static <T extends IOException> T wrapWithMessage( > T exception, String msg) throws T { > Class<? extends Throwable> clazz = exception.getClass(); > try { > Constructor<? extends Throwable> ctor = clazz.getConstructor(String.class); > Throwable t = ctor.newInstance(msg); > return (T)(t.initCause(exception)); > } catch (Throwable e) { > LOG.warn("Unable to wrap exception of type {}: it has no (String) " > + "constructor", clazz, e); > throw exception; > } > }{code} > {code:java} > 2019-04-09 18:07:27,824 WARN ipc.Client > (Client.java:handleConnectionFailure(938)) - Interrupted while trying for > connection > 2019-04-09 18:07:27,826 WARN net.NetUtils > (NetUtils.java:wrapWithMessage(834)) - Unable to wrap exception of type class > java.nio.channels.ClosedByInterruptException: it has no (String) constructor > java.lang.NoSuchMethodException: > java.nio.channels.ClosedByInterruptException.<init>(java.lang.String) > at java.lang.Class.getConstructor0(Class.java:3082) > at java.lang.Class.getConstructor(Class.java:1825) > at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:830) > at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806) > at 
org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515) > at org.apache.hadoop.ipc.Client.call(Client.java:1457) > at org.apache.hadoop.ipc.Client.call(Client.java:1367) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) > at com.sun.proxy.$Proxy84.register(Unknown Source) > at > org.apache.hadoop.ozone.protocolPB.StorageContainerDatanodeProtocolClientSideTranslatorPB.register(StorageContainerDatanodeProtocolClientSideTranslatorPB.java:160) > at > org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:120) > at > org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
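The WARN above fires because {{wrapWithMessage}} looks up a {{(String)}} constructor reflectively, and {{ClosedByInterruptException}} does not declare one. A small self-contained check of that precondition (the class and method names here are ours, not Hadoop's):

```java
import java.io.IOException;
import java.nio.channels.ClosedByInterruptException;

// Sketch of the reflective check wrapWithMessage relies on: does the
// exception class declare a public (String) constructor?
public class WrapCheck {
    public static boolean hasStringCtor(Class<?> clazz) {
        try {
            clazz.getConstructor(String.class);
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // IOException can be wrapped with a message; ClosedByInterruptException
        // cannot, which is what triggers the WARN log above.
        System.out.println(hasStringCtor(IOException.class));                // true
        System.out.println(hasStringCtor(ClosedByInterruptException.class)); // false
    }
}
```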
[jira] [Resolved] (HADOOP-16198) Upgrade Jackson-databind version to 2.9.8
[ https://issues.apache.org/jira/browse/HADOOP-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham resolved HADOOP-16198. - Resolution: Duplicate > Upgrade Jackson-databind version to 2.9.8 > - > > Key: HADOOP-16198 > URL: https://issues.apache.org/jira/browse/HADOOP-16198 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > Jackson-databind 2.9.8 has a few fixes which are important to include. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-16198) Upgrade Jackson-databind version to 2.9.8
Bharat Viswanadham created HADOOP-16198: --- Summary: Upgrade Jackson-databind version to 2.9.8 Key: HADOOP-16198 URL: https://issues.apache.org/jira/browse/HADOOP-16198 Project: Hadoop Common Issue Type: Bug Reporter: Bharat Viswanadham Assignee: Bharat Viswanadham Jackson-databind is affected by the CVEs below, which are being reported by customers. CVE-2018-14719 CVE-2018-14720 CVE-2018-14721 CVE-2018-1000873 CVE-2018-7489 CVE-2018-19362 CVE-2017-15095 CVE-2018-19361 CVE-2017-7525 CVE-2018-19360 CVE-2017-17485 CVE-2018-5968 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16198) Upgrade Jackson-databind version to 2.9.8
[ https://issues.apache.org/jira/browse/HADOOP-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-16198: Description: Jackson-databind is affected by below CVEs and are getting reported by Customers. CVE-2018-14719 CVE-2018-14720 CVE-2018-14721 CVE-2018-1000873 CVE-2018-7489 CVE-2018-19362 CVE-2017-15095 CVE-2018-19361 CVE-2017-7525 CVE-2018-19360 CVE-2017-17485 CVE-2018-5968 We need to upgrade this to version 2.9.8. was: Jackson-databind is affected by below CVEs and are getting reported by Customers. CVE-2018-14719 CVE-2018-14720 CVE-2018-14721 CVE-2018-1000873 CVE-2018-7489 CVE-2018-19362 CVE-2017-15095 CVE-2018-19361 CVE-2017-7525 CVE-2018-19360 CVE-2017-17485 CVE-2018-5968 > Upgrade Jackson-databind version to 2.9.8 > - > > Key: HADOOP-16198 > URL: https://issues.apache.org/jira/browse/HADOOP-16198 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > Jackson-databind is affected by below CVEs and are getting reported by > Customers. > CVE-2018-14719 > CVE-2018-14720 > CVE-2018-14721 > CVE-2018-1000873 > CVE-2018-7489 > CVE-2018-19362 > CVE-2017-15095 > CVE-2018-19361 > CVE-2017-7525 > CVE-2018-19360 > CVE-2017-17485 > CVE-2018-5968 > > We need to upgrade this to version 2.9.8. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16075) Upgrade checkstyle version to 8.16
[ https://issues.apache.org/jira/browse/HADOOP-16075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16752695#comment-16752695 ] Bharat Viswanadham commented on HADOOP-16075: - Any reason why we need this? Just asking whether it caused any issues with the current version. > Upgrade checkstyle version to 8.16 > -- > > Key: HADOOP-16075 > URL: https://issues.apache.org/jira/browse/HADOOP-16075 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.2.0 >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Minor > Attachments: HADOOP-16075.00.patch > > > Jira aims to upgrade checkstyle version from 8.8 to 8.16. > It is a minor upgrade with some bug fixes in checkstyle and it's a negligible > risk change. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15990) S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2
[ https://issues.apache.org/jira/browse/HADOOP-15990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742761#comment-16742761 ] Bharat Viswanadham commented on HADOOP-15990: - Yes, we support only V2 list in S3 Gateway. > S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2 > > > Key: HADOOP-15990 > URL: https://issues.apache.org/jira/browse/HADOOP-15990 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: lqjacklee >Priority: Major > Attachments: HADOOP-15409-005.patch, HADOOP-15990-006.patch > > > in S3AFileSystem.initialize(), we check for the bucket existing with > verifyBucketExists(), which calls s3.doesBucketExist(). But that doesn't > check for auth issues. > s3. doesBucketExistV2() does at least validate credentials, and should be > switched to. This will help things fail faster > See SPARK-24000 > (this is a dupe of HADOOP-15409; moving off git PRs so we can get yetus to > test everything) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16014) Fix test, checkstyle and javadoc issues in TestKerberosAuthenticationHandler
[ https://issues.apache.org/jira/browse/HADOOP-16014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-16014: Fix Version/s: (was: 3.2.1) 3.3.0 > Fix test, checkstyle and javadoc issues in TestKerberosAuthenticationHandler > > > Key: HADOOP-16014 > URL: https://issues.apache.org/jira/browse/HADOOP-16014 > Project: Hadoop Common > Issue Type: Improvement > Components: test >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Fix For: 3.3.0 > > Attachments: HADOOP-16014.001.patch, HADOOP-16014.002.patch > > > TestKerberosAuthenticationHandler has multiple checkstyle violations, missing > javadoc and some tests are not annotated with @Test thus not being run. > This jira aims to fix above issues. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16014) Fix test, checkstyle and javadoc issues in TestKerberosAuthenticationHandler
[ https://issues.apache.org/jira/browse/HADOOP-16014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-16014: Resolution: Fixed Fix Version/s: 3.2.1 Status: Resolved (was: Patch Available) Thank You [~dineshchitlangia] for the contribution and [~knanasi] for the review. > Fix test, checkstyle and javadoc issues in TestKerberosAuthenticationHandler > > > Key: HADOOP-16014 > URL: https://issues.apache.org/jira/browse/HADOOP-16014 > Project: Hadoop Common > Issue Type: Improvement > Components: test >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Fix For: 3.2.1 > > Attachments: HADOOP-16014.001.patch, HADOOP-16014.002.patch > > > TestKerberosAuthenticationHandler has multiple checkstyle violations, missing > javadoc and some tests are not annotated with @Test thus not being run. > This jira aims to fix above issues. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-16014) Fix test, checkstyle and javadoc issues in TestKerberosAuthenticationHandler
[ https://issues.apache.org/jira/browse/HADOOP-16014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16727085#comment-16727085 ] Bharat Viswanadham edited comment on HADOOP-16014 at 12/21/18 9:52 PM: --- Thank You [~dineshchitlangia] for the contribution and [~knanasi] for the review. I have committed this to trunk. When I tried to apply it, I got a compilation issue on branch-3.0, so I have not proceeded to commit this to the other branches. If you need this change in other branches, please provide a patch for those branches. was (Author: bharatviswa): Thank You [~dineshchitlangia] for the contribution and [~knanasi] for the review. I have committed this to trunk. > Fix test, checkstyle and javadoc issues in TestKerberosAuthenticationHandler > > > Key: HADOOP-16014 > URL: https://issues.apache.org/jira/browse/HADOOP-16014 > Project: Hadoop Common > Issue Type: Improvement > Components: test >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Fix For: 3.2.1 > > Attachments: HADOOP-16014.001.patch, HADOOP-16014.002.patch > > > TestKerberosAuthenticationHandler has multiple checkstyle violations, missing > javadoc and some tests are not annotated with @Test thus not being run. > This jira aims to fix above issues. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-16014) Fix test, checkstyle and javadoc issues in TestKerberosAuthenticationHandler
[ https://issues.apache.org/jira/browse/HADOOP-16014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16727085#comment-16727085 ] Bharat Viswanadham edited comment on HADOOP-16014 at 12/21/18 9:46 PM: --- Thank You [~dineshchitlangia] for the contribution and [~knanasi] for the review. I have committed this to trunk. was (Author: bharatviswa): Thank You [~dineshchitlangia] for the contribution and [~knanasi] for the review. > Fix test, checkstyle and javadoc issues in TestKerberosAuthenticationHandler > > > Key: HADOOP-16014 > URL: https://issues.apache.org/jira/browse/HADOOP-16014 > Project: Hadoop Common > Issue Type: Improvement > Components: test >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Fix For: 3.2.1 > > Attachments: HADOOP-16014.001.patch, HADOOP-16014.002.patch > > > TestKerberosAuthenticationHandler has multiple checkstyle violations, missing > javadoc and some tests are not annotated with @Test thus not being run. > This jira aims to fix above issues. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16014) Fix test, checkstyle and javadoc issues in TestKerberosAuthenticationHandler
[ https://issues.apache.org/jira/browse/HADOOP-16014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16727023#comment-16727023 ] Bharat Viswanadham commented on HADOOP-16014: - +1 LGTM. I will commit this shortly. > Fix test, checkstyle and javadoc issues in TestKerberosAuthenticationHandler > > > Key: HADOOP-16014 > URL: https://issues.apache.org/jira/browse/HADOOP-16014 > Project: Hadoop Common > Issue Type: Improvement > Components: test >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Attachments: HADOOP-16014.001.patch, HADOOP-16014.002.patch > > > TestKerberosAuthenticationHandler has multiple checkstyle violations, missing > javadoc and some tests are not annotated with @Test thus not being run. > This jira aims to fix above issues. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15924) Hadoop aws cannot be used with shaded jars
[ https://issues.apache.org/jira/browse/HADOOP-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684558#comment-16684558 ] Bharat Viswanadham commented on HADOOP-15924: - Marking this as Patch Available to get Jenkins run on this. > Hadoop aws cannot be used with shaded jars > -- > > Key: HADOOP-15924 > URL: https://issues.apache.org/jira/browse/HADOOP-15924 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Priority: Major > Attachments: HADOOP-15924.00.patch > > > Issue is hadoop-aws cannot be used with shaded jars. > The recommended client side jars for hadoop 3 are client-api/runtime shaded > jars. > They shade guava etc. So something like SemaphoredDelegatingExecutor refers > to shaded guava classes. > hadoop-aws has S3AFileSystem implementation which refers to > SemaphoredDelegatingExecutor with unshaded guava ListeningService in the > constructor. When S3AFileSystem is created then it uses the hadoop-api jar > and finds SemaphoredDelegatingExecutor but not the right constructor because > in client-api jar SemaphoredDelegatingExecutor constructor has the shaded > guava ListenerService. > So essentially none of the aws/azure/adl hadoop FS implementations will work > with the shaded Hadoop client runtime jars. > > This Jira is created to track the work required to make hadoop-aws work with > hadoop shaded client jars. > > The solution for this can be, hadoop-aws depends on hadoop shaded jars. In > this way, we shall not see the issue. Currently, hadoop-aws depends on > aws-sdk-bundle and all other remaining jars are provided dependencies. > > cc [~steve_l] > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15924) Hadoop aws cannot be used with shaded jars
[ https://issues.apache.org/jira/browse/HADOOP-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15924: Status: Patch Available (was: Open) > Hadoop aws cannot be used with shaded jars > -- > > Key: HADOOP-15924 > URL: https://issues.apache.org/jira/browse/HADOOP-15924 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Priority: Major > Attachments: HADOOP-15924.00.patch > > > Issue is hadoop-aws cannot be used with shaded jars. > The recommended client side jars for hadoop 3 are client-api/runtime shaded > jars. > They shade guava etc. So something like SemaphoredDelegatingExecutor refers > to shaded guava classes. > hadoop-aws has S3AFileSystem implementation which refers to > SemaphoredDelegatingExecutor with unshaded guava ListeningService in the > constructor. When S3AFileSystem is created then it uses the hadoop-api jar > and finds SemaphoredDelegatingExecutor but not the right constructor because > in client-api jar SemaphoredDelegatingExecutor constructor has the shaded > guava ListenerService. > So essentially none of the aws/azure/adl hadoop FS implementations will work > with the shaded Hadoop client runtime jars. > > This Jira is created to track the work required to make hadoop-aws work with > hadoop shaded client jars. > > The solution for this can be, hadoop-aws depends on hadoop shaded jars. In > this way, we shall not see the issue. Currently, hadoop-aws depends on > aws-sdk-bundle and all other remaining jars are provided dependencies. > > cc [~steve_l] > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15924) Hadoop aws cannot be used with shaded jars
[ https://issues.apache.org/jira/browse/HADOOP-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684542#comment-16684542 ] Bharat Viswanadham commented on HADOOP-15924: - I have not run the test suite against the AWS S3 endpoint; I just ran tests against the S3 gateway endpoint to see whether we get any CNFE errors (none were seen). > Hadoop aws cannot be used with shaded jars > -- > > Key: HADOOP-15924 > URL: https://issues.apache.org/jira/browse/HADOOP-15924 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Priority: Major > Attachments: HADOOP-15924.00.patch > > > Issue is hadoop-aws cannot be used with shaded jars. > The recommended client side jars for hadoop 3 are client-api/runtime shaded > jars. > They shade guava etc. So something like SemaphoredDelegatingExecutor refers > to shaded guava classes. > hadoop-aws has S3AFileSystem implementation which refers to > SemaphoredDelegatingExecutor with unshaded guava ListeningService in the > constructor. When S3AFileSystem is created then it uses the hadoop-api jar > and finds SemaphoredDelegatingExecutor but not the right constructor because > in client-api jar SemaphoredDelegatingExecutor constructor has the shaded > guava ListenerService. > So essentially none of the aws/azure/adl hadoop FS implementations will work > with the shaded Hadoop client runtime jars. > > This Jira is created to track the work required to make hadoop-aws work with > hadoop shaded client jars. > > The solution for this can be, hadoop-aws depends on hadoop shaded jars. In > this way, we shall not see the issue. Currently, hadoop-aws depends on > aws-sdk-bundle and all other remaining jars are provided dependencies. > > cc [~steve_l] > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15924) Hadoop aws cannot be used with shaded jars
[ https://issues.apache.org/jira/browse/HADOOP-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15924: Attachment: HADOOP-15924.00.patch > Hadoop aws cannot be used with shaded jars > -- > > Key: HADOOP-15924 > URL: https://issues.apache.org/jira/browse/HADOOP-15924 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Priority: Major > Attachments: HADOOP-15924.00.patch > > > Issue is hadoop-aws cannot be used with shaded jars. > The recommended client side jars for hadoop 3 are client-api/runtime shaded > jars. > They shade guava etc. So something like SemaphoredDelegatingExecutor refers > to shaded guava classes. > hadoop-aws has S3AFileSystem implementation which refers to > SemaphoredDelegatingExecutor with unshaded guava ListeningService in the > constructor. When S3AFileSystem is created then it uses the hadoop-api jar > and finds SemaphoredDelegatingExecutor but not the right constructor because > in client-api jar SemaphoredDelegatingExecutor constructor has the shaded > guava ListenerService. > So essentially none of the aws/azure/adl hadoop FS implementations will work > with the shaded Hadoop client runtime jars. > > This Jira is created to track the work required to make hadoop-aws work with > hadoop shaded client jars. > > The solution for this can be, hadoop-aws depends on hadoop shaded jars. In > this way, we shall not see the issue. Currently, hadoop-aws depends on > aws-sdk-bundle and all other remaining jars are provided dependencies. > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15924) Hadoop aws cannot be used with shaded jars
[ https://issues.apache.org/jira/browse/HADOOP-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15924: Description: Issue is hadoop-aws cannot be used with shaded jars. The recommended client side jars for hadoop 3 are client-api/runtime shaded jars. They shade guava etc. So something like SemaphoredDelegatingExecutor refers to shaded guava classes. hadoop-aws has S3AFileSystem implementation which refers to SemaphoredDelegatingExecutor with unshaded guava ListeningService in the constructor. When S3AFileSystem is created then it uses the hadoop-api jar and finds SemaphoredDelegatingExecutor but not the right constructor because in client-api jar SemaphoredDelegatingExecutor constructor has the shaded guava ListenerService. So essentially none of the aws/azure/adl hadoop FS implementations will work with the shaded Hadoop client runtime jars. This Jira is created to track the work required to make hadoop-aws work with hadoop shaded client jars. The solution for this can be, hadoop-aws depends on hadoop shaded jars. In this way, we shall not see the issue. Currently, hadoop-aws depends on aws-sdk-bundle and all other remaining jars are provided dependencies. cc [~steve_l] was: Issue is hadoop-aws cannot be used with shaded jars. The recommended client side jars for hadoop 3 are client-api/runtime shaded jars. They shade guava etc. So something like SemaphoredDelegatingExecutor refers to shaded guava classes. hadoop-aws has S3AFileSystem implementation which refers to SemaphoredDelegatingExecutor with unshaded guava ListeningService in the constructor. When S3AFileSystem is created then it uses the hadoop-api jar and finds SemaphoredDelegatingExecutor but not the right constructor because in client-api jar SemaphoredDelegatingExecutor constructor has the shaded guava ListenerService. So essentially none of the aws/azure/adl hadoop FS implementations will work with the shaded Hadoop client runtime jars. 
This Jira is created to track the work required to make hadoop-aws work with hadoop shaded client jars. The solution for this can be, hadoop-aws depends on hadoop shaded jars. In this way, we shall not see the issue. Currently, hadoop-aws depends on aws-sdk-bundle and all other remaining jars are provided dependencies. > Hadoop aws cannot be used with shaded jars > -- > > Key: HADOOP-15924 > URL: https://issues.apache.org/jira/browse/HADOOP-15924 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Priority: Major > Attachments: HADOOP-15924.00.patch > > > Issue is hadoop-aws cannot be used with shaded jars. > The recommended client side jars for hadoop 3 are client-api/runtime shaded > jars. > They shade guava etc. So something like SemaphoredDelegatingExecutor refers > to shaded guava classes. > hadoop-aws has S3AFileSystem implementation which refers to > SemaphoredDelegatingExecutor with unshaded guava ListeningService in the > constructor. When S3AFileSystem is created then it uses the hadoop-api jar > and finds SemaphoredDelegatingExecutor but not the right constructor because > in client-api jar SemaphoredDelegatingExecutor constructor has the shaded > guava ListenerService. > So essentially none of the aws/azure/adl hadoop FS implementations will work > with the shaded Hadoop client runtime jars. > > This Jira is created to track the work required to make hadoop-aws work with > hadoop shaded client jars. > > The solution for this can be, hadoop-aws depends on hadoop shaded jars. In > this way, we shall not see the issue. Currently, hadoop-aws depends on > aws-sdk-bundle and all other remaining jars are provided dependencies. > > cc [~steve_l] > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-15924) Hadoop aws cannot be used with shaded jars
Bharat Viswanadham created HADOOP-15924: --- Summary: Hadoop aws cannot be used with shaded jars Key: HADOOP-15924 URL: https://issues.apache.org/jira/browse/HADOOP-15924 Project: Hadoop Common Issue Type: Bug Reporter: Bharat Viswanadham Issue is hadoop-aws cannot be used with shaded jars. The recommended client side jars for hadoop 3 are client-api/runtime shaded jars. They shade guava etc. So something like SemaphoredDelegatingExecutor refers to shaded guava classes. hadoop-aws has S3AFileSystem implementation which refers to SemaphoredDelegatingExecutor with unshaded guava ListeningService in the constructor. When S3AFileSystem is created then it uses the hadoop-api jar and finds SemaphoredDelegatingExecutor but not the right constructor because in client-api jar SemaphoredDelegatingExecutor constructor has the shaded guava ListenerService. So essentially none of the aws/azure/adl hadoop FS implementations will work with the shaded Hadoop client runtime jars. This Jira is created to track the work required to make hadoop-aws work with hadoop shaded client jars. The solution for this can be, hadoop-aws depends on hadoop shaded jars. In this way, we shall not see the issue. Currently, hadoop-aws depends on aws-sdk-bundle and all other remaining jars are provided dependencies. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
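The constructor mismatch described above comes down to type identity: after shading, the relocated guava type is a different {{Class}} than the original, so code compiled against the original type no longer matches the constructor the shaded jar actually exposes. A toy illustration using reflection and stand-in types (all names below are ours, not Hadoop's or guava's):

```java
// Toy illustration of the shading problem: reflection (like linking) matches
// constructor parameters by exact Class identity, so a relocated (shaded)
// type is a different type and the lookup fails.
public class ShadeMismatchDemo {
    // Stand-ins for the original and the shaded (relocated) parameter type.
    public interface OriginalService {}
    public interface RelocatedService {}

    // Stand-in for the executor the shaded client jar exposes: its only
    // constructor takes the relocated type.
    public static class Executor {
        public Executor(RelocatedService svc) {}
    }

    public static boolean hasCtor(Class<?> paramType) {
        try {
            Executor.class.getConstructor(paramType);
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // A caller compiled against the original type finds no matching
        // constructor -- analogous to S3AFileSystem against the shaded jars.
        System.out.println(hasCtor(OriginalService.class));  // false
        System.out.println(hasCtor(RelocatedService.class)); // true
    }
}
```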
[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version to 9.3.24
[ https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16665424#comment-16665424 ] Bharat Viswanadham commented on HADOOP-15815: - Thank You [~sunilg] for taking care of this. > Upgrade Eclipse Jetty version to 9.3.24 > --- > > Key: HADOOP-15815 > URL: https://issues.apache.org/jira/browse/HADOOP-15815 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.1.1, 3.0.3 >Reporter: Boris Vulikh >Assignee: Boris Vulikh >Priority: Major > Fix For: 3.2.0, 3.0.4, 3.3.0, 3.1.2 > > Attachments: HADOOP-15815.01-2.patch > > > * > [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657] > * > [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658] > * > [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656] > * > [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536] > We should upgrade the dependency to version 9.3.24 or the latest, if possible. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns
[ https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16664304#comment-16664304 ] Bharat Viswanadham edited comment on HADOOP-15815 at 10/25/18 11:19 PM: +1 LGTM. I have compiled and was able to build successfully after HADOOP-15882 went in. Edit: I will wait for a day, If there are no objections I will commit by end of day tomorrow. was (Author: bharatviswa): +1 LGTM. I have compiled and was able to build successfully after HADOOP-15882 went in. > Upgrade Eclipse Jetty version due to security concerns > -- > > Key: HADOOP-15815 > URL: https://issues.apache.org/jira/browse/HADOOP-15815 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.1.1, 3.0.3 >Reporter: Boris Vulikh >Assignee: Boris Vulikh >Priority: Major > Attachments: HADOOP-15815.01-2.patch > > > * > [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657] > * > [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658] > * > [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656] > * > [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536] > We should upgrade the dependency to version 9.3.24 or the latest, if possible. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns
[ https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16664304#comment-16664304 ] Bharat Viswanadham commented on HADOOP-15815: - +1 LGTM. I have compiled and was able to build successfully after HADOOP-15882 went in. > Upgrade Eclipse Jetty version due to security concerns > -- > > Key: HADOOP-15815 > URL: https://issues.apache.org/jira/browse/HADOOP-15815 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.1.1, 3.0.3 >Reporter: Boris Vulikh >Assignee: Boris Vulikh >Priority: Major > Attachments: HADOOP-15815.01-2.patch > > > * > [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657] > * > [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658] > * > [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656] > * > [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536] > We should upgrade the dependency to version 9.3.24 or the latest, if possible. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15882) Upgrade maven-shade-plugin from 2.4.3 to 3.2.0
[ https://issues.apache.org/jira/browse/HADOOP-15882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15882: Resolution: Fixed Fix Version/s: 3.1.2 3.3.0 3.0.4 3.2.0 Status: Resolved (was: Patch Available) Thank you, [~tasanuma0829], for filing this and providing the fix. I have committed it to trunk, branch-3.0, branch-3.1, and branch-3.2. > Upgrade maven-shade-plugin from 2.4.3 to 3.2.0 > -- > > Key: HADOOP-15882 > URL: https://issues.apache.org/jira/browse/HADOOP-15882 > Project: Hadoop Common > Issue Type: Task >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Fix For: 3.2.0, 3.0.4, 3.3.0, 3.1.2 > > Attachments: HADOOP-15882.1.patch > > > While working on HADOOP-15815, we faced a shaded-client error. Please > see [~bharatviswa]'s comment > [here|https://issues.apache.org/jira/browse/HADOOP-15815?focusedCommentId=16662718=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16662718]. > MSHADE-242 and MSHADE-258 are needed to fix it. Let's upgrade > maven-shade-plugin to 3.1.0 or later. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
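The upgrade committed above is essentially a version bump of the shade plugin in the build. A sketch of the relevant pom fragment — illustrative only; the actual location and surrounding configuration in the Hadoop poms may differ:

```xml
<!-- Illustrative fragment: pin maven-shade-plugin at 3.2.0, which can
     process dependency jars containing module-info.class
     (see MSHADE-242 and MSHADE-258), unlike the previous 2.4.3. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.0</version>
</plugin>
```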
[jira] [Comment Edited] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns
[ https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16664083#comment-16664083 ] Bharat Viswanadham edited comment on HADOOP-15815 at 10/25/18 5:52 PM: --- Hi Sunil, Yes, we need HADOOP-15882 to get this change committed. There will be no impact on the UI from this change. This Jira upgrades Eclipse Jetty because of the above-mentioned CVEs, and the other Jira updates the maven-shade-plugin version. was (Author: bharatviswa): Hi Sunil, There will be no impact on the UI from this change. This Jira upgrades Eclipse Jetty because of the above-mentioned CVEs, and the other Jira updates the maven-shade-plugin version. > Upgrade Eclipse Jetty version due to security concerns > -- > > Key: HADOOP-15815 > URL: https://issues.apache.org/jira/browse/HADOOP-15815 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.1.1, 3.0.3 >Reporter: Boris Vulikh >Assignee: Boris Vulikh >Priority: Major > Attachments: HADOOP-15815.01-2.patch > > > * > [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657] > * > [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658] > * > [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656] > * > [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536] > We should upgrade the dependency to version 9.3.24 or the latest, if possible. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns
[ https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16664083#comment-16664083 ] Bharat Viswanadham commented on HADOOP-15815: - Hi Sunil, There will be no impact on the UI from this change. This Jira upgrades Eclipse Jetty because of the above-mentioned CVEs, and the other Jira updates the maven-shade-plugin version. > Upgrade Eclipse Jetty version due to security concerns > -- > > Key: HADOOP-15815 > URL: https://issues.apache.org/jira/browse/HADOOP-15815 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.1.1, 3.0.3 >Reporter: Boris Vulikh >Assignee: Boris Vulikh >Priority: Major > Attachments: HADOOP-15815.01-2.patch > > > * > [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657] > * > [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658] > * > [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656] > * > [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536] > We should upgrade the dependency to version 9.3.24 or the latest, if possible. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15882) Upgrade maven-shade-plugin from 2.4.3 to 3.2.0
[ https://issues.apache.org/jira/browse/HADOOP-15882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16664078#comment-16664078 ] Bharat Viswanadham commented on HADOOP-15882: - +1 LGTM. I will commit this shortly. > Upgrade maven-shade-plugin from 2.4.3 to 3.2.0 > -- > > Key: HADOOP-15882 > URL: https://issues.apache.org/jira/browse/HADOOP-15882 > Project: Hadoop Common > Issue Type: Task >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Attachments: HADOOP-15882.1.patch > > > While working on HADOOP-15815, we have faced a shaded-client error. Please > see [~bharatviswa]'s comment > [here|https://issues.apache.org/jira/browse/HADOOP-15815?focusedCommentId=16662718=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16662718]. > MSHADE-242 and MSHADE-258 are needed to fix it. Let's upgrade > maven-shade-plugin to 3.1.0 or later. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns
[ https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662946#comment-16662946 ] Bharat Viswanadham edited comment on HADOOP-15815 at 10/24/18 10:48 PM: From the Jira comments, MSHADE-242 happens when minifying the jar; this issue happens when relocating classes of jars that have a module descriptor. This also means it will break the intended strong encapsulation. Java 9 does not provide a solution for that yet, so I guess we'll have to log a warning as well. It might impact Java 9, as asm 6.0 supports Java 9. But I have not completely understood the issue, or whether it will affect Java 8. Is this what you are asking, [~busbey]? Ping [~ajisakaa] and [~tasanuma0829] for help in understanding any impact of simply upgrading the maven-shade-plugin version and ignoring this warning. was (Author: bharatviswa): From the Jira comments, MSHADE-242 happens when minifying the jar; this issue happens when relocating classes of jars that have a module descriptor. This also means it will break the intended strong encapsulation. Java 9 does not provide a solution for that yet, so I guess we'll have to log a warning as well. It might impact Java 9, as asm 6.0 supports Java 9. But I have not completely understood the issue, or whether it will affect Java 8. Is this what you are asking, [~busbey]? Tagging [~ajisakaa] for help in understanding any impact of simply upgrading the maven-shade-plugin version and ignoring this warning. 
> Upgrade Eclipse Jetty version due to security concerns > -- > > Key: HADOOP-15815 > URL: https://issues.apache.org/jira/browse/HADOOP-15815 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.1.1, 3.0.3 >Reporter: Boris Vulikh >Assignee: Boris Vulikh >Priority: Major > Attachments: HADOOP-15815.01-2.patch > > > * > [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657] > * > [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658] > * > [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656] > * > [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536] > We should upgrade the dependency to version 9.3.24 or the latest, if possible. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns
[ https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662946#comment-16662946 ] Bharat Viswanadham commented on HADOOP-15815: - From the Jira comments, MSHADE-242 happens when minifying the jar; this issue happens when relocating classes of jars that have a module descriptor. This also means it will break the intended strong encapsulation. Java 9 does not provide a solution for that yet, so I guess we'll have to log a warning as well. It might impact Java 9, as asm 6.0 supports Java 9. But I have not completely understood the issue, or whether it will affect Java 8. Is this what you are asking, [~busbey]? Tagging [~ajisakaa] for help in understanding any impact of simply upgrading the maven-shade-plugin version and ignoring this warning. > Upgrade Eclipse Jetty version due to security concerns > -- > > Key: HADOOP-15815 > URL: https://issues.apache.org/jira/browse/HADOOP-15815 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.1.1, 3.0.3 >Reporter: Boris Vulikh >Assignee: Boris Vulikh >Priority: Major > Attachments: HADOOP-15815.01-2.patch > > > * > [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657] > * > [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658] > * > [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656] > * > [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536] > We should upgrade the dependency to version 9.3.24 or the latest, if possible. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns
[ https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662718#comment-16662718 ] Bharat Viswanadham edited comment on HADOOP-15815 at 10/24/18 7:26 PM: --- I also see the same issue when applying the patch, but upgrading the maven-shade-plugin version to 3.1.0 resolved it: https://issues.apache.org/jira/browse/MSHADE-258 This happens when a jar has a module descriptor; that Jira also mentions the same issue when using a jar with a module descriptor (the same asm jar). The failure occurs exactly after the asm jar is processed. When I checked that jar, it has module-info.class. So, upgrading maven-shade-plugin will resolve this issue. As to why we are seeing this issue with this patch: jetty 9.3.24.v20180605 depends on the asm 6.0 jar, which has module-info.class, whereas with 9.3.19 we get the asm 5.0.1 jar, which does not have module-info.class. {code:java} HW13865:Downloads bviswanadham$ jar -tf asm-commons-6.0.jar | grep "module" module-info.class {code} {code:java} HW13865:Downloads bviswanadham$ jar -tf asm-commons-5.0.jar | grep "module" HW13865:Downloads bviswanadham$ {code} {code:java} [INFO] +- org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:3.3.0-SNAPSHOT:compile (optional) [INFO] | +- org.eclipse.jetty.websocket:javax-websocket-server-impl:jar:9.3.24.v20180605:compile [INFO] | | +- org.eclipse.jetty:jetty-annotations:jar:9.3.24.v20180605:compile [INFO] | | | +- org.eclipse.jetty:jetty-plus:jar:9.3.24.v20180605:compile [INFO] | | | | \- org.eclipse.jetty:jetty-jndi:jar:9.3.24.v20180605:compile [INFO] | | | +- javax.annotation:javax.annotation-api:jar:1.2:compile [INFO] | | | \- org.ow2.asm:asm-commons:jar:6.0:compile [INFO] | | | \- org.ow2.asm:asm-tree:jar:6.0:compile{code} {code:java} [INFO] +- org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:3.3.0-SNAPSHOT:compile (optional) [INFO] | +- org.eclipse.jetty.websocket:javax-websocket-server-impl:jar:9.3.19.v20170502:compile 
[INFO] | | +- org.eclipse.jetty:jetty-annotations:jar:9.3.19.v20170502:compile [INFO] | | | +- org.eclipse.jetty:jetty-plus:jar:9.3.19.v20170502:compile [INFO] | | | | \- org.eclipse.jetty:jetty-jndi:jar:9.3.19.v20170502:compile [INFO] | | | +- javax.annotation:javax.annotation-api:jar:1.2:compile [INFO] | | | \- org.ow2.asm:asm-commons:jar:5.0.1:compile [INFO] | | | \- org.ow2.asm:asm-tree:jar:5.0.1:compile{code} So, to resolve this, I think we should upgrade to a recent maven-shade-plugin such as 3.1.0. {code:java} [DEBUG] Processing JAR /Users/bviswanadham/.m2/repository/org/ow2/asm/asm-commons/6.0/asm-commons-6.0.jar [INFO] [INFO] BUILD FAILURE [INFO] [INFO] Total time: 01:27 min [INFO] Finished at: 2018-10-24T12:10:58-07:00 [INFO] Final Memory: 51M/1642M [INFO] [ERROR] Failed to execute goal org.apache.maven.plugins:maven-shade-plugin:2.4.3:shade (default) on project hadoop-client-minicluster: Error creating shaded jar: null: IllegalArgumentException -> [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-shade-plugin:2.4.3:shade (default) on project hadoop-client-minicluster: Error creating shaded jar: null at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:213) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:154) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:146) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81) at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:309) at 
org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:194) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:107) at org.apache.maven.cli.MavenCli.execute(MavenCli.java:993) at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:345) at org.apache.maven.cli.MavenCli.main(MavenCli.java:191) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289) at
[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns
[ https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662718#comment-16662718 ] Bharat Viswanadham commented on HADOOP-15815: - I also see the same issue when applying the patch, but upgrading the maven-shade-plugin version to 3.1.0 resolved it: https://issues.apache.org/jira/browse/MSHADE-258 This happens when a jar has a module descriptor. The failure occurs exactly after the asm jar is processed; when I checked that jar, it has module-info.class {code:java} HW13865:Downloads bviswanadham$ jar -tf asm-commons-6.0.jar | grep "module" module-info.class {code} So, upgrading maven-shade-plugin will resolve this issue. And we are seeing this issue with this patch because jetty 9.3.24.v20180605 depends on the asm 6.0 jar, which has module-info.class, whereas with 9.3.19 we get the asm 5.0.1 jar, which does not have module-info.class. {code:java} [INFO] +- org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:3.3.0-SNAPSHOT:compile (optional) [INFO] | +- org.eclipse.jetty.websocket:javax-websocket-server-impl:jar:9.3.24.v20180605:compile [INFO] | | +- org.eclipse.jetty:jetty-annotations:jar:9.3.24.v20180605:compile [INFO] | | | +- org.eclipse.jetty:jetty-plus:jar:9.3.24.v20180605:compile [INFO] | | | | \- org.eclipse.jetty:jetty-jndi:jar:9.3.24.v20180605:compile [INFO] | | | +- javax.annotation:javax.annotation-api:jar:1.2:compile [INFO] | | | \- org.ow2.asm:asm-commons:jar:6.0:compile [INFO] | | | \- org.ow2.asm:asm-tree:jar:6.0:compile{code} {code:java} [INFO] +- org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:3.3.0-SNAPSHOT:compile (optional) [INFO] | +- org.eclipse.jetty.websocket:javax-websocket-server-impl:jar:9.3.19.v20170502:compile [INFO] | | +- org.eclipse.jetty:jetty-annotations:jar:9.3.19.v20170502:compile [INFO] | | | +- org.eclipse.jetty:jetty-plus:jar:9.3.19.v20170502:compile [INFO] | | | | \- org.eclipse.jetty:jetty-jndi:jar:9.3.19.v20170502:compile [INFO] | | | +- 
javax.annotation:javax.annotation-api:jar:1.2:compile [INFO] | | | \- org.ow2.asm:asm-commons:jar:5.0.1:compile [INFO] | | | \- org.ow2.asm:asm-tree:jar:5.0.1:compile{code} So, to resolve this, I think we should upgrade to a recent maven-shade-plugin such as 3.1.0. {code:java} [DEBUG] Processing JAR /Users/bviswanadham/.m2/repository/org/ow2/asm/asm-commons/6.0/asm-commons-6.0.jar [INFO] [INFO] BUILD FAILURE [INFO] [INFO] Total time: 01:27 min [INFO] Finished at: 2018-10-24T12:10:58-07:00 [INFO] Final Memory: 51M/1642M [INFO] [ERROR] Failed to execute goal org.apache.maven.plugins:maven-shade-plugin:2.4.3:shade (default) on project hadoop-client-minicluster: Error creating shaded jar: null: IllegalArgumentException -> [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-shade-plugin:2.4.3:shade (default) on project hadoop-client-minicluster: Error creating shaded jar: null at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:213) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:154) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:146) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81) at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:309) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:194) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:107) at org.apache.maven.cli.MavenCli.execute(MavenCli.java:993) at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:345) at 
org.apache.maven.cli.MavenCli.main(MavenCli.java:191) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289) at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229) at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415) at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356) Caused by:
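The `jar -tf | grep "module"` check used in the comment above generalizes to any dependency jar. A small sketch, assuming `unzip` is available (jars are plain zip files, so no JDK is needed) and with hypothetical paths, that flags the jars carrying a module descriptor — the trigger for the maven-shade-plugin 2.4.3 failure:

```shell
#!/bin/sh
# Sketch: report whether a jar carries a module descriptor (module-info.class),
# which is what trips maven-shade-plugin 2.4.3 during class relocation.
has_module_info() {
  unzip -l "$1" 2>/dev/null | grep -q 'module-info\.class'
}

# Scan every jar under a directory (path is hypothetical, e.g. ~/.m2/repository)
# and print the modular ones.
scan_jars() {
  find "$1" -name '*.jar' 2>/dev/null | while read -r j; do
    if has_module_info "$j"; then
      echo "modular: $j"
    fi
  done
}
```

Running `scan_jars "$HOME/.m2/repository"` would surface asm-commons 6.0 but not asm-commons 5.0.1, matching the `jar -tf` output shown in the comment.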
[jira] [Commented] (HADOOP-15879) Upgrade eclipse jetty version to 9.3.25.v20180904
[ https://issues.apache.org/jira/browse/HADOOP-15879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662647#comment-16662647 ] Bharat Viswanadham commented on HADOOP-15879: - Thank you, [~jeagles]. I have closed this Jira. > Upgrade eclipse jetty version to 9.3.25.v20180904 > - > > Key: HADOOP-15879 > URL: https://issues.apache.org/jira/browse/HADOOP-15879 > Project: Hadoop Common > Issue Type: Task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HADOOP-15879.00.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15879) Upgrade eclipse jetty version to 9.3.25.v20180904
[ https://issues.apache.org/jira/browse/HADOOP-15879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15879: Resolution: Duplicate Status: Resolved (was: Patch Available) > Upgrade eclipse jetty version to 9.3.25.v20180904 > - > > Key: HADOOP-15879 > URL: https://issues.apache.org/jira/browse/HADOOP-15879 > Project: Hadoop Common > Issue Type: Task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HADOOP-15879.00.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15879) Upgrade eclipse jetty version to 9.3.25.v20180904
[ https://issues.apache.org/jira/browse/HADOOP-15879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15879: Attachment: HADOOP-15879.00.patch > Upgrade eclipse jetty version to 9.3.25.v20180904 > - > > Key: HADOOP-15879 > URL: https://issues.apache.org/jira/browse/HADOOP-15879 > Project: Hadoop Common > Issue Type: Task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HADOOP-15879.00.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15879) Upgrade eclipse jetty version to 9.3.25.v20180904
[ https://issues.apache.org/jira/browse/HADOOP-15879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15879: Status: Patch Available (was: Open) > Upgrade eclipse jetty version to 9.3.25.v20180904 > - > > Key: HADOOP-15879 > URL: https://issues.apache.org/jira/browse/HADOOP-15879 > Project: Hadoop Common > Issue Type: Task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HADOOP-15879.00.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-15879) Upgrade eclipse jetty version to 9.3.25.v20180904
Bharat Viswanadham created HADOOP-15879: --- Summary: Upgrade eclipse jetty version to 9.3.25.v20180904 Key: HADOOP-15879 URL: https://issues.apache.org/jira/browse/HADOOP-15879 Project: Hadoop Common Issue Type: Task Reporter: Bharat Viswanadham -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-15879) Upgrade eclipse jetty version to 9.3.25.v20180904
[ https://issues.apache.org/jira/browse/HADOOP-15879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned HADOOP-15879: --- Assignee: Bharat Viswanadham > Upgrade eclipse jetty version to 9.3.25.v20180904 > - > > Key: HADOOP-15879 > URL: https://issues.apache.org/jira/browse/HADOOP-15879 > Project: Hadoop Common > Issue Type: Task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15673) Hadoop:3 image is missing from dockerhub
[ https://issues.apache.org/jira/browse/HADOOP-15673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15673: Status: Patch Available (was: Open) > Hadoop:3 image is missing from dockerhub > > > Key: HADOOP-15673 > URL: https://issues.apache.org/jira/browse/HADOOP-15673 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Elek, Marton >Assignee: Bharat Viswanadham >Priority: Major > Labels: newbie > Attachments: HADOOP-15673-docker-hadoop-3.00.patch > > > Currently the apache/hadoop:3 image is missing from the dockerhub as the > Dockerfile in docker-hadoop-3 branch contains the outdated 3.0.0 download > url. It should be updated to the latest 3.1.1 url. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15673) Hadoop:3 image is missing from dockerhub
[ https://issues.apache.org/jira/browse/HADOOP-15673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15673: Attachment: (was: HADOOP-15673.00.patch) > Hadoop:3 image is missing from dockerhub > > > Key: HADOOP-15673 > URL: https://issues.apache.org/jira/browse/HADOOP-15673 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Elek, Marton >Assignee: Bharat Viswanadham >Priority: Major > Labels: newbie > Attachments: HADOOP-15673-docker-hadoop-3.00.patch > > > Currently the apache/hadoop:3 image is missing from the dockerhub as the > Dockerfile in docker-hadoop-3 branch contains the outdated 3.0.0 download > url. It should be updated to the latest 3.1.1 url. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15673) Hadoop:3 image is missing from dockerhub
[ https://issues.apache.org/jira/browse/HADOOP-15673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15673: Attachment: HADOOP-15673-docker-hadoop-3.00.patch > Hadoop:3 image is missing from dockerhub > > > Key: HADOOP-15673 > URL: https://issues.apache.org/jira/browse/HADOOP-15673 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Elek, Marton >Assignee: Bharat Viswanadham >Priority: Major > Labels: newbie > Attachments: HADOOP-15673-docker-hadoop-3.00.patch > > > Currently the apache/hadoop:3 image is missing from the dockerhub as the > Dockerfile in docker-hadoop-3 branch contains the outdated 3.0.0 download > url. It should be updated to the latest 3.1.1 url. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15673) Hadoop:3 image is missing from dockerhub
[ https://issues.apache.org/jira/browse/HADOOP-15673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615467#comment-16615467 ] Bharat Viswanadham commented on HADOOP-15673: - [~elek] Updated the patch to change the URLs to 3.1.1. > Hadoop:3 image is missing from dockerhub > > > Key: HADOOP-15673 > URL: https://issues.apache.org/jira/browse/HADOOP-15673 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Elek, Marton >Assignee: Bharat Viswanadham >Priority: Major > Labels: newbie > Attachments: HADOOP-15673-docker-hadoop-3.00.patch > > > Currently the apache/hadoop:3 image is missing from the dockerhub as the > Dockerfile in docker-hadoop-3 branch contains the outdated 3.0.0 download > url. It should be updated to the latest 3.1.1 url. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15673) Hadoop:3 image is missing from dockerhub
[ https://issues.apache.org/jira/browse/HADOOP-15673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15673: Attachment: HADOOP-15673.00.patch
[jira] [Assigned] (HADOOP-15673) Hadoop:3 image is missing from dockerhub
[ https://issues.apache.org/jira/browse/HADOOP-15673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned HADOOP-15673: --- Assignee: Bharat Viswanadham
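The HADOOP-15673 thread above boils down to bumping the hard-coded download URL in the docker-hadoop-3 branch Dockerfile from 3.0.0 to 3.1.1. The fragment below is only an illustrative sketch of that kind of fix; the base image, mirror path, and surrounding steps in the actual Dockerfile may well differ.

```
# Illustrative fragment only; the real Dockerfile in the docker-hadoop-3
# branch may be structured differently.
FROM openjdk:8-jdk

# Assumed Apache archive layout; verify against the current mirror.
ENV HADOOP_VERSION=3.1.1
RUN wget -q \
      https://archive.apache.org/dist/hadoop/common/hadoop-${HADOOP_VERSION}/hadoop-${HADOOP_VERSION}.tar.gz \
    && tar -xzf hadoop-${HADOOP_VERSION}.tar.gz -C /opt \
    && rm hadoop-${HADOOP_VERSION}.tar.gz
```

Parameterizing the version in one ENV line keeps future bumps to a single-line change instead of another stale-URL issue.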
[jira] [Resolved] (HADOOP-15139) [Umbrella] Improvements and fixes for Hadoop shaded client work
[ https://issues.apache.org/jira/browse/HADOOP-15139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham resolved HADOOP-15139. - Resolution: Fixed Target Version/s: 3.1.1, 3.2.0 (was: 3.2.0) > [Umbrella] Improvements and fixes for Hadoop shaded client work > > > Key: HADOOP-15139 > URL: https://issues.apache.org/jira/browse/HADOOP-15139 > Project: Hadoop Common > Issue Type: Bug >Reporter: Junping Du >Assignee: Bharat Viswanadham >Priority: Critical > > In HADOOP-11656, we have made great progress in splitting out third-party > dependencies from shaded hadoop client jar (hadoop-client-api), put runtime > dependencies in hadoop-client-runtime, and have shaded version of > hadoop-client-minicluster for test. However, there are still some left work > for this feature to be fully completed: > - We don't have a comprehensive documentation to guide downstream > projects/users to use shaded JARs instead of previous JARs > - We should consider to wrap up hadoop tools (distcp, aws, azure) to have > shaded version > - More issues could be identified when shaded jars are adopted in more test > and production environment, like HADOOP-15137. > Let's have this umbrella JIRA to track all efforts that left to improve > hadoop shaded client effort. > CC [~busbey], [~bharatviswa] and [~vinodkv]. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15139) [Umbrella] Improvements and fixes for Hadoop shaded client work
[ https://issues.apache.org/jira/browse/HADOOP-15139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16611274#comment-16611274 ] Bharat Viswanadham commented on HADOOP-15139: - [~sunilg] Sorry for late reply, I was on vacation, just came back today. Yes, this can be closed.
[jira] [Updated] (HADOOP-15514) NoClassDefFoundError of TimelineCollectorManager when starting MiniCluster
[ https://issues.apache.org/jira/browse/HADOOP-15514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15514: Issue Type: Sub-task (was: Bug) Parent: HADOOP-15139 > NoClassDefFoundError of TimelineCollectorManager when starting MiniCluster > -- > > Key: HADOOP-15514 > URL: https://issues.apache.org/jira/browse/HADOOP-15514 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Jeff Zhang >Priority: Major > > {code:java} > org.apache.hadoop.yarn.exceptions.YarnRuntimeException: > java.lang.NoClassDefFoundError: > org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorManager > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.ClassLoader.defineClass1(Native Method) > at java.lang.ClassLoader.defineClass(ClassLoader.java:763) > at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) > at java.net.URLClassLoader.defineClass(URLClassLoader.java:467) > at java.net.URLClassLoader.access$100(URLClassLoader.java:73) > at java.net.URLClassLoader$1.run(URLClassLoader.java:368) > at java.net.URLClassLoader$1.run(URLClassLoader.java:362) > at java.security.AccessController.doPrivileged(Native Method) > at java.net.URLClassLoader.findClass(URLClassLoader.java:361) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.getDeclaredMethods0(Native Method) > at java.lang.Class.privateGetDeclaredMethods(Class.java:2701) > at java.lang.Class.getDeclaredMethods(Class.java:1975){code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org 
[jira] [Updated] (HADOOP-15514) NoClassDefFoundError of TimelineCollectorManager when starting MiniCluster
[ https://issues.apache.org/jira/browse/HADOOP-15514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15514: Affects Version/s: 3.0.0
[jira] [Updated] (HADOOP-15137) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster
[ https://issues.apache.org/jira/browse/HADOOP-15137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15137: Fix Version/s: 3.1.1 3.2.0 > ClassNotFoundException: > org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using > hadoop-client-minicluster > -- > > Key: HADOOP-15137 > URL: https://issues.apache.org/jira/browse/HADOOP-15137 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Jeff Zhang >Assignee: Bharat Viswanadham >Priority: Major > Fix For: 3.2.0, 3.1.1 > > Attachments: HADOOP-15137.01.patch, HADOOP-15137.02.patch, > YARN-7673.00.patch > > > I'd like to use hadoop-client-minicluster for hadoop downstream project, but > I encounter the following exception when starting hadoop minicluster. And I > check the hadoop-client-minicluster, it indeed does not have this class. Is > this something that is missing when packaging the published jar ? > {code} > java.lang.NoClassDefFoundError: > org/apache/hadoop/yarn/server/api/DistributedSchedulingAMProtocol > at java.lang.ClassLoader.defineClass1(Native Method) > at java.lang.ClassLoader.defineClass(ClassLoader.java:763) > at > java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) > at java.net.URLClassLoader.defineClass(URLClassLoader.java:467) > at java.net.URLClassLoader.access$100(URLClassLoader.java:73) > at java.net.URLClassLoader$1.run(URLClassLoader.java:368) > at java.net.URLClassLoader$1.run(URLClassLoader.java:362) > at java.security.AccessController.doPrivileged(Native Method) > at java.net.URLClassLoader.findClass(URLClassLoader.java:361) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.hadoop.yarn.server.MiniYARNCluster.createResourceManager(MiniYARNCluster.java:851) > at > 
org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:285) > at > org.apache.hadoop.service.AbstractService.init(AbstractService.java:164) > {code}
[jira] [Updated] (HADOOP-15137) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster
[ https://issues.apache.org/jira/browse/HADOOP-15137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15137: Target Version/s: (was: 3.2.0, 3.1.1)
[jira] [Commented] (HADOOP-15137) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster
[ https://issues.apache.org/jira/browse/HADOOP-15137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501056#comment-16501056 ] Bharat Viswanadham commented on HADOOP-15137: - Committed to branch-3, branch-3.1, and trunk.
[jira] [Updated] (HADOOP-15137) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster
[ https://issues.apache.org/jira/browse/HADOOP-15137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15137: Resolution: Fixed Target Version/s: 3.2.0, 3.1.1 Status: Resolved (was: Patch Available)
[jira] [Comment Edited] (HADOOP-15137) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster
[ https://issues.apache.org/jira/browse/HADOOP-15137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500978#comment-16500978 ] Bharat Viswanadham edited comment on HADOOP-15137 at 6/4/18 10:43 PM: -- Thank You [~jnp] for review. I will commit this patch shortly. was (Author: bharatviswa): I will commit this patch shortly.
[jira] [Commented] (HADOOP-15137) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster
[ https://issues.apache.org/jira/browse/HADOOP-15137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500978#comment-16500978 ] Bharat Viswanadham commented on HADOOP-15137: - I will commit this patch shortly.
[jira] [Commented] (HADOOP-15137) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster
[ https://issues.apache.org/jira/browse/HADOOP-15137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497528#comment-16497528 ] Bharat Viswanadham commented on HADOOP-15137: - [~rohithsharma] branch-3, branch-3.1 and trunk.
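The HADOOP-15137 and HADOOP-15514 threads above both stem from a class (e.g. DistributedSchedulingAMProtocol) being absent from the shaded hadoop-client-minicluster jar. A quick way to diagnose this kind of failure before booting a MiniYARNCluster is a reflective classpath probe. The helper below is a generic, illustrative sketch, not part of any Hadoop API; the class name in `main` is simply the one taken from the stack trace above.

```java
// Probe the classpath for a class without triggering its static initializers.
public class ClasspathProbe {

    /** Returns true if the named class can be located by this class's loader. */
    public static boolean isClassPresent(String className) {
        try {
            // initialize=false: we only care about visibility, not init side effects
            Class.forName(className, false, ClasspathProbe.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException | LinkageError e) {
            // ClassNotFoundException: not on the classpath at all;
            // LinkageError: present but broken (e.g. a missing transitive dependency)
            return false;
        }
    }

    public static void main(String[] args) {
        // Hypothetical check mirroring the HADOOP-15137 failure:
        String suspect =
            "org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol";
        System.out.println("java.lang.String present: "
            + isClassPresent("java.lang.String"));
        System.out.println(suspect + " present: " + isClassPresent(suspect));
    }
}
```

Running such a probe against the shaded jar makes it immediately clear whether the packaging step dropped the class, without waiting for a NoClassDefFoundError deep inside MiniYARNCluster.serviceInit.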
[jira] [Commented] (HADOOP-15490) Multiple declaration of maven-enforcer-plugin found in pom.xml
[ https://issues.apache.org/jira/browse/HADOOP-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496938#comment-16496938 ] Bharat Viswanadham commented on HADOOP-15490: - Thank You [~nandakumar131] for contribution. I have committed this to trunk. > Multiple declaration of maven-enforcer-plugin found in pom.xml > -- > > Key: HADOOP-15490 > URL: https://issues.apache.org/jira/browse/HADOOP-15490 > Project: Hadoop Common > Issue Type: Bug >Reporter: Nanda kumar >Assignee: Nanda kumar >Priority: Minor > Fix For: 3.2.0 > > Attachments: HADOOP-15490.000.patch > > > Multiple declaration of {{maven-enforcer-plugin}} in {{pom.xml}} is causing > the below warning during build. > {noformat} > [WARNING] Some problems were encountered while building the effective model > for org.apache.hadoop:hadoop-project:pom:3.2.0-SNAPSHOT > [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but > found duplicate declaration of plugin > org.apache.maven.plugins:maven-enforcer-plugin @ > org.apache.hadoop:hadoop-main:3.2.0-SNAPSHOT, > /Users/nvadivelu/codebase/apache/hadoop/pom.xml, line 431, column 15 > [WARNING] > [WARNING] Some problems were encountered while building the effective model > for org.apache.hadoop:hadoop-main:pom:3.2.0-SNAPSHOT > [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but > found duplicate declaration of plugin > org.apache.maven.plugins:maven-enforcer-plugin @ line 431, column 15 > [WARNING] > [WARNING] It is highly recommended to fix these problems because they > threaten the stability of your build. > [WARNING] > [WARNING] For this reason, future Maven versions might no longer support > building such malformed projects. > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15490) Multiple declaration of maven-enforcer-plugin found in pom.xml
[ https://issues.apache.org/jira/browse/HADOOP-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15490: Resolution: Fixed Fix Version/s: 3.2.0 Status: Resolved (was: Patch Available)
[jira] [Comment Edited] (HADOOP-15490) Multiple declaration of maven-enforcer-plugin found in pom.xml
[ https://issues.apache.org/jira/browse/HADOOP-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496922#comment-16496922 ] Bharat Viswanadham edited comment on HADOOP-15490 at 5/31/18 6:01 PM: -- Hi [~nandakumar131] Thank You for reporting and providing the patch. I built the code using the patch, now the Warning's are not appearing during the build. LGTM +1. ASF license warning is not related to this patch. Will commit this shortly. was (Author: bharatviswa): Hi [~nandakumar131] Thank You for reporting and providing the patch. I built the code using the patch, now the Warning's are not appearing during the build. LGTM +1. Will commit this shortly.
[jira] [Commented] (HADOOP-15490) Multiple declaration of maven-enforcer-plugin found in pom.xml
[ https://issues.apache.org/jira/browse/HADOOP-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496922#comment-16496922 ] Bharat Viswanadham commented on HADOOP-15490: - Hi [~nandakumar131], thank you for reporting this and providing the patch. I built the code with the patch applied, and the warnings no longer appear during the build. LGTM, +1. Will commit this shortly. > Multiple declaration of maven-enforcer-plugin found in pom.xml > -- > > Key: HADOOP-15490 > URL: https://issues.apache.org/jira/browse/HADOOP-15490 > Project: Hadoop Common > Issue Type: Bug >Reporter: Nanda kumar >Assignee: Nanda kumar >Priority: Minor > Attachments: HADOOP-15490.000.patch > > > The duplicate declaration of {{maven-enforcer-plugin}} in {{pom.xml}} causes > the warning below during the build. > {noformat} > [WARNING] Some problems were encountered while building the effective model > for org.apache.hadoop:hadoop-project:pom:3.2.0-SNAPSHOT > [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but > found duplicate declaration of plugin > org.apache.maven.plugins:maven-enforcer-plugin @ > org.apache.hadoop:hadoop-main:3.2.0-SNAPSHOT, > /Users/nvadivelu/codebase/apache/hadoop/pom.xml, line 431, column 15 > [WARNING] > [WARNING] Some problems were encountered while building the effective model > for org.apache.hadoop:hadoop-main:pom:3.2.0-SNAPSHOT > [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but > found duplicate declaration of plugin > org.apache.maven.plugins:maven-enforcer-plugin @ line 431, column 15 > [WARNING] > [WARNING] It is highly recommended to fix these problems because they > threaten the stability of your build. > [WARNING] > [WARNING] For this reason, future Maven versions might no longer support > building such malformed projects. 
> {noformat}
[jira] [Commented] (HADOOP-15402) Prevent double logout of UGI's LoginContext
[ https://issues.apache.org/jira/browse/HADOOP-15402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446044#comment-16446044 ] Bharat Viswanadham commented on HADOOP-15402: - +1 LGTM. > Prevent double logout of UGI's LoginContext > --- > > Key: HADOOP-15402 > URL: https://issues.apache.org/jira/browse/HADOOP-15402 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 3.1.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Major > Attachments: HADOOP-15402.patch > > > HADOOP-15294 worked around a LoginContext NPE resulting from a double logout > by peering into the Subject. A cleaner fix is tracking whether the > LoginContext is logged in.
[jira] [Updated] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-12953: Attachment: HADOOP-12953.004.patch > New API for libhdfs to get FileSystem object as a proxy user > > > Key: HADOOP-12953 > URL: https://issues.apache.org/jira/browse/HADOOP-12953 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.7.2 >Reporter: Uday Kale >Assignee: Uday Kale >Priority: Major > Attachments: HADOOP-12953.001.patch, HADOOP-12953.002.patch, > HADOOP-12953.003.patch, HADOOP-12953.004.patch > > > Secure impersonation in HDFS needs users to create proxy users and work with > those. In libhdfs, the hdfsBuilder accepts a userName but calls > FileSystem.get() or FileSystem.newInstance() with the user name to connect as. > But both these interfaces use getBestUGI() to get the UGI for the given user. > This does not suit services whose end-users do not access HDFS directly but > instead go via the service: the end-user first authenticates with LDAP, and > the service owner then impersonates the end-user to provide the underlying data. > For such services that authenticate end-users via LDAP, the end users are not > authenticated by Kerberos, so their authentication details won't be in the > Kerberos ticket cache. HADOOP_PROXY_USER is not a thread-safe way to get this > either. > Hence the need for a new API in libhdfs to get the FileSystem object as a > proxy user, following the 'secure impersonation' recommendations. This approach is > secure since HDFS authenticates the service owner and then validates the > service owner's right to impersonate the given user, as allowed by the > hadoop.proxyusers.* parameters of the HDFS config.
[jira] [Updated] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-12953: Attachment: (was: HADOOP-12953.004.patch) > New API for libhdfs to get FileSystem object as a proxy user > > > Key: HADOOP-12953 > URL: https://issues.apache.org/jira/browse/HADOOP-12953 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.7.2 >Reporter: Uday Kale >Assignee: Uday Kale >Priority: Major > Attachments: HADOOP-12953.001.patch, HADOOP-12953.002.patch, > HADOOP-12953.003.patch > > > Secure impersonation in HDFS needs users to create proxy users and work with > those. In libhdfs, the hdfsBuilder accepts a userName but calls > FileSystem.get() or FileSystem.newInstance() with the user name to connect as. > But both these interfaces use getBestUGI() to get the UGI for the given user. > This does not suit services whose end-users do not access HDFS directly but > instead go via the service: the end-user first authenticates with LDAP, and > the service owner then impersonates the end-user to provide the underlying data. > For such services that authenticate end-users via LDAP, the end users are not > authenticated by Kerberos, so their authentication details won't be in the > Kerberos ticket cache. HADOOP_PROXY_USER is not a thread-safe way to get this > either. > Hence the need for a new API in libhdfs to get the FileSystem object as a > proxy user, following the 'secure impersonation' recommendations. This approach is > secure since HDFS authenticates the service owner and then validates the > service owner's right to impersonate the given user, as allowed by the > hadoop.proxyusers.* parameters of the HDFS config.
[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444672#comment-16444672 ] Bharat Viswanadham commented on HADOOP-12953: - Thank you, [~arpitagarwal], for the review. {quote}We probably need to add hdfsBuilderSetCreateProxyUser to hdfs.h, hdfs_shim, libhdfs_wrapper_defines.h etc. {quote} Added it in hdfs.h; this patch only takes care of the change in the libhdfs C client. Further changes to libhdfs++ can be handled in a new JIRA. {quote}Also it may be helpful to define a new method hdfsConnectAsProxyUser, similar to hdfsConnectAsUser. {quote} Since the old connect methods are deprecated, I did not add a similar method for the proxy user. {quote}Nitpick: single statement if/else blocks should still have curly braces. e.g. here: {quote} {code:java} if (bld->createProxyUser) methodToCall = "newInstanceAsProxyUser"; else methodToCall = "newInstance";{code} Addressed this. > New API for libhdfs to get FileSystem object as a proxy user > > > Key: HADOOP-12953 > URL: https://issues.apache.org/jira/browse/HADOOP-12953 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.7.2 >Reporter: Uday Kale >Assignee: Uday Kale >Priority: Major > Attachments: HADOOP-12953.001.patch, HADOOP-12953.002.patch, > HADOOP-12953.003.patch, HADOOP-12953.004.patch > > > Secure impersonation in HDFS needs users to create proxy users and work with > those. In libhdfs, the hdfsBuilder accepts a userName but calls > FileSystem.get() or FileSystem.newInstance() with the user name to connect as. > But both these interfaces use getBestUGI() to get the UGI for the given user. > This does not suit services whose end-users do not access HDFS directly but > instead go via the service: the end-user first authenticates with LDAP, and > the service owner then impersonates the end-user to provide the underlying data. 
> For such services that authenticate end-users via LDAP, the end users are not > authenticated by Kerberos, so their authentication details won't be in the > Kerberos ticket cache. HADOOP_PROXY_USER is not a thread-safe way to get this > either. > Hence the need for a new API in libhdfs to get the FileSystem object as a > proxy user, following the 'secure impersonation' recommendations. This approach is > secure since HDFS authenticates the service owner and then validates the > service owner's right to impersonate the given user, as allowed by the > hadoop.proxyusers.* parameters of the HDFS config.
[jira] [Updated] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-12953: Attachment: HADOOP-12953.004.patch > New API for libhdfs to get FileSystem object as a proxy user > > > Key: HADOOP-12953 > URL: https://issues.apache.org/jira/browse/HADOOP-12953 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.7.2 >Reporter: Uday Kale >Assignee: Uday Kale >Priority: Major > Attachments: HADOOP-12953.001.patch, HADOOP-12953.002.patch, > HADOOP-12953.003.patch, HADOOP-12953.004.patch > > > Secure impersonation in HDFS needs users to create proxy users and work with > those. In libhdfs, the hdfsBuilder accepts a userName but calls > FileSystem.get() or FileSystem.newInstance() with the user name to connect as. > But both these interfaces use getBestUGI() to get the UGI for the given user. > This does not suit services whose end-users do not access HDFS directly but > instead go via the service: the end-user first authenticates with LDAP, and > the service owner then impersonates the end-user to provide the underlying data. > For such services that authenticate end-users via LDAP, the end users are not > authenticated by Kerberos, so their authentication details won't be in the > Kerberos ticket cache. HADOOP_PROXY_USER is not a thread-safe way to get this > either. > Hence the need for a new API in libhdfs to get the FileSystem object as a > proxy user, following the 'secure impersonation' recommendations. This approach is > secure since HDFS authenticates the service owner and then validates the > service owner's right to impersonate the given user, as allowed by the > hadoop.proxyusers.* parameters of the HDFS config.
[jira] [Updated] (HADOOP-15379) Make IrqHandler.bind() public
[ https://issues.apache.org/jira/browse/HADOOP-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15379: Hadoop Flags: Reviewed > Make IrqHandler.bind() public > - > > Key: HADOOP-15379 > URL: https://issues.apache.org/jira/browse/HADOOP-15379 > Project: Hadoop Common > Issue Type: Bug > Components: util >Affects Versions: 3.1.0 >Reporter: Steve Loughran >Assignee: Ajay Kumar >Priority: Minor > Fix For: 3.2.0, 3.1.1 > > Attachments: HADOOP-15379.00.patch > > > {{org.apache.hadoop.service.launcher.IrqHandler.bind()}} is package private. > This means you can create an {{Interrupted}} handler in a different package, > but you can't bind it to a signal.
[jira] [Commented] (HADOOP-15379) Make IrqHandler.bind() public
[ https://issues.apache.org/jira/browse/HADOOP-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436807#comment-16436807 ] Bharat Viswanadham commented on HADOOP-15379: - Committed this to trunk and branch-3.1. Thank you [~ajayydv] for working on this and [~ste...@apache.org] for reporting it. > Make IrqHandler.bind() public > - > > Key: HADOOP-15379 > URL: https://issues.apache.org/jira/browse/HADOOP-15379 > Project: Hadoop Common > Issue Type: Bug > Components: util >Affects Versions: 3.1.0 >Reporter: Steve Loughran >Assignee: Ajay Kumar >Priority: Minor > Fix For: 3.2.0, 3.1.1 > > Attachments: HADOOP-15379.00.patch > > > {{org.apache.hadoop.service.launcher.IrqHandler.bind()}} is package private. > This means you can create an {{Interrupted}} handler in a different package, > but you can't bind it to a signal.
[jira] [Updated] (HADOOP-15379) Make IrqHandler.bind() public
[ https://issues.apache.org/jira/browse/HADOOP-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15379: Fix Version/s: 3.1.1 3.2.0 > Make IrqHandler.bind() public > - > > Key: HADOOP-15379 > URL: https://issues.apache.org/jira/browse/HADOOP-15379 > Project: Hadoop Common > Issue Type: Bug > Components: util >Affects Versions: 3.1.0 >Reporter: Steve Loughran >Assignee: Ajay Kumar >Priority: Minor > Fix For: 3.2.0, 3.1.1 > > Attachments: HADOOP-15379.00.patch > > > {{org.apache.hadoop.service.launcher.IrqHandler.bind()}} is package private. > This means you can create an {{Interrupted}} handler in a different package, > but you can't bind it to a signal.
[jira] [Updated] (HADOOP-15379) Make IrqHandler.bind() public
[ https://issues.apache.org/jira/browse/HADOOP-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-15379: Resolution: Fixed Target Version/s: 3.2.0, 3.1.1 (was: 3.2.0) Status: Resolved (was: Patch Available) > Make IrqHandler.bind() public > - > > Key: HADOOP-15379 > URL: https://issues.apache.org/jira/browse/HADOOP-15379 > Project: Hadoop Common > Issue Type: Bug > Components: util >Affects Versions: 3.1.0 >Reporter: Steve Loughran >Assignee: Ajay Kumar >Priority: Minor > Attachments: HADOOP-15379.00.patch > > > {{org.apache.hadoop.service.launcher.IrqHandler.bind()}} is package private. > This means you can create an {{Interrupted}} handler in a different package, > but you can't bind it to a signal.
[jira] [Commented] (HADOOP-15379) Make IrqHandler.bind() public
[ https://issues.apache.org/jira/browse/HADOOP-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436800#comment-16436800 ] Bharat Viswanadham commented on HADOOP-15379: - I will commit this shortly. > Make IrqHandler.bind() public > - > > Key: HADOOP-15379 > URL: https://issues.apache.org/jira/browse/HADOOP-15379 > Project: Hadoop Common > Issue Type: Bug > Components: util >Affects Versions: 3.1.0 >Reporter: Steve Loughran >Assignee: Ajay Kumar >Priority: Minor > Attachments: HADOOP-15379.00.patch > > > {{org.apache.hadoop.service.launcher.IrqHandler.bind()}} is package private. > This means you can create an {{Interrupted}} handler in a different package, > but you can't bind it to a signal.
[jira] [Commented] (HADOOP-15369) Avoid usage of ${project.version} in parent poms
[ https://issues.apache.org/jira/browse/HADOOP-15369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436060#comment-16436060 ] Bharat Viswanadham commented on HADOOP-15369: - LGTM. The documentation needs to cover the additional step where we modify hadoop.version to match project.version. +1 once the above is fixed. > Avoid usage of ${project.version} in parent poms > > > Key: HADOOP-15369 > URL: https://issues.apache.org/jira/browse/HADOOP-15369 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.2.0 >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Attachments: HADOOP-15369-trnk.001.patch > > > hadoop-project/pom.xml and hadoop-project-dist/pom.xml use the > _${project.version}_ variable in dependencyManagement and plugin dependencies. > Unfortunately this cannot work if we use a different version in a child > project, as the ${project.version} variable is resolved *after* inheritance. > From the [maven > doc|https://maven.apache.org/guides/introduction/introduction-to-the-pom.html#Project_Inheritance]: > {quote} > For example, to access the project.version variable, you would reference it > like so: > ${project.version} > One factor to note is that these variables are processed after inheritance as > outlined above. This means that if a parent project uses a variable, then its > definition in the child, not the parent, will be the one eventually used. > {quote} > The community voted to keep ozone in-tree but use a different release cycle. > To achieve this we need a different version for selected subprojects; > therefore we can't use ${project.version} any more. >